00:00:00.000 Started by upstream project "autotest-per-patch" build number 132823
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.096 The recommended git tool is: git
00:00:00.096 using credential 00000000-0000-0000-0000-000000000002
00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.165 Fetching changes from the remote Git repository
00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.229 Using shallow fetch with depth 1
00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.229 > git --version # timeout=10
00:00:00.262 > git --version # 'git version 2.39.2'
00:00:00.262 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.049 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.063 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.078 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.078 > git config core.sparsecheckout # timeout=10
00:00:07.092 > git read-tree -mu HEAD # timeout=10
00:00:07.110 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.139 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.139 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.304 [Pipeline] Start of Pipeline
00:00:07.315 [Pipeline] library
00:00:07.317 Loading library shm_lib@master
00:00:07.317 Library shm_lib@master is cached. Copying from home.
00:00:07.328 [Pipeline] node
00:00:07.339 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.342 [Pipeline] {
00:00:07.349 [Pipeline] catchError
00:00:07.350 [Pipeline] {
00:00:07.359 [Pipeline] wrap
00:00:07.365 [Pipeline] {
00:00:07.369 [Pipeline] stage
00:00:07.371 [Pipeline] { (Prologue)
00:00:07.618 [Pipeline] sh
00:00:07.898 + logger -p user.info -t JENKINS-CI
00:00:07.911 [Pipeline] echo
00:00:07.912 Node: WFP3
00:00:07.918 [Pipeline] sh
00:00:08.211 [Pipeline] setCustomBuildProperty
00:00:08.222 [Pipeline] echo
00:00:08.224 Cleanup processes
00:00:08.236 [Pipeline] sh
00:00:08.520 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.520 4039627 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.532 [Pipeline] sh
00:00:08.814 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.814 ++ grep -v 'sudo pgrep'
00:00:08.814 ++ awk '{print $1}'
00:00:08.814 + sudo kill -9
00:00:08.814 + true
00:00:08.827 [Pipeline] cleanWs
00:00:08.836 [WS-CLEANUP] Deleting project workspace...
00:00:08.836 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.843 [WS-CLEANUP] done
00:00:08.847 [Pipeline] setCustomBuildProperty
00:00:08.859 [Pipeline] sh
00:00:09.140 + sudo git config --global --replace-all safe.directory '*'
00:00:09.217 [Pipeline] httpRequest
00:00:11.252 [Pipeline] echo
00:00:11.255 Sorcerer 10.211.164.112 is alive
00:00:11.264 [Pipeline] retry
00:00:11.266 [Pipeline] {
00:00:11.277 [Pipeline] httpRequest
00:00:11.285 HttpMethod: GET
00:00:11.285 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.286 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.310 Response Code: HTTP/1.1 200 OK
00:00:11.310 Success: Status code 200 is in the accepted range: 200,404
00:00:11.310 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:23.628 [Pipeline] }
00:00:23.646 [Pipeline] // retry
00:00:23.654 [Pipeline] sh
00:00:23.940 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:23.955 [Pipeline] httpRequest
00:00:24.382 [Pipeline] echo
00:00:24.384 Sorcerer 10.211.164.112 is alive
00:00:24.394 [Pipeline] retry
00:00:24.396 [Pipeline] {
00:00:24.410 [Pipeline] httpRequest
00:00:24.415 HttpMethod: GET
00:00:24.415 URL: http://10.211.164.112/packages/spdk_4fb5f9881288fa423aeb210bd3effeae4ee4652e.tar.gz
00:00:24.416 Sending request to url: http://10.211.164.112/packages/spdk_4fb5f9881288fa423aeb210bd3effeae4ee4652e.tar.gz
00:00:24.421 Response Code: HTTP/1.1 200 OK
00:00:24.422 Success: Status code 200 is in the accepted range: 200,404
00:00:24.422 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4fb5f9881288fa423aeb210bd3effeae4ee4652e.tar.gz
00:03:11.663 [Pipeline] }
00:03:11.681 [Pipeline] // retry
00:03:11.688 [Pipeline] sh
00:03:11.974 + tar --no-same-owner -xf spdk_4fb5f9881288fa423aeb210bd3effeae4ee4652e.tar.gz
00:03:14.521 [Pipeline] sh
00:03:14.806 + git -C spdk log --oneline -n5
00:03:14.806 4fb5f9881 nvme/rdma: Register UMR per IO request
00:03:14.806 0edc184ec accel/mlx5: Support mkey registration
00:03:14.806 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts
00:03:14.806 1ae735a5d nvme: add poll_group interrupt callback
00:03:14.806 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:03:14.816 [Pipeline] }
00:03:14.830 [Pipeline] // stage
00:03:14.839 [Pipeline] stage
00:03:14.842 [Pipeline] { (Prepare)
00:03:14.858 [Pipeline] writeFile
00:03:14.873 [Pipeline] sh
00:03:15.163 + logger -p user.info -t JENKINS-CI
00:03:15.174 [Pipeline] sh
00:03:15.451 + logger -p user.info -t JENKINS-CI
00:03:15.463 [Pipeline] sh
00:03:15.747 + cat autorun-spdk.conf
00:03:15.747 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:15.747 SPDK_TEST_NVMF=1
00:03:15.747 SPDK_TEST_NVME_CLI=1
00:03:15.747 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:15.747 SPDK_TEST_NVMF_NICS=e810
00:03:15.747 SPDK_TEST_VFIOUSER=1
00:03:15.747 SPDK_RUN_UBSAN=1
00:03:15.747 NET_TYPE=phy
00:03:15.754 RUN_NIGHTLY=0
00:03:15.758 [Pipeline] readFile
00:03:15.781 [Pipeline] withEnv
00:03:15.783 [Pipeline] {
00:03:15.795 [Pipeline] sh
00:03:16.083 + set -ex
00:03:16.083 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:16.083 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:16.083 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:16.083 ++ SPDK_TEST_NVMF=1
00:03:16.083 ++ SPDK_TEST_NVME_CLI=1
00:03:16.083 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:16.083 ++ SPDK_TEST_NVMF_NICS=e810
00:03:16.083 ++ SPDK_TEST_VFIOUSER=1
00:03:16.083 ++ SPDK_RUN_UBSAN=1
00:03:16.083 ++ NET_TYPE=phy
00:03:16.083 ++ RUN_NIGHTLY=0
00:03:16.083 + case $SPDK_TEST_NVMF_NICS in
00:03:16.083 + DRIVERS=ice
00:03:16.083 + [[ tcp == \r\d\m\a ]]
00:03:16.083 + [[ -n ice ]]
00:03:16.083 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:16.083 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:16.083 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:16.083 rmmod: ERROR: Module i40iw is not currently loaded
00:03:16.083 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:16.083 + true
00:03:16.083 + for D in $DRIVERS
00:03:16.083 + sudo modprobe ice
00:03:16.083 + exit 0
00:03:16.093 [Pipeline] }
00:03:16.108 [Pipeline] // withEnv
00:03:16.114 [Pipeline] }
00:03:16.128 [Pipeline] // stage
00:03:16.137 [Pipeline] catchError
00:03:16.138 [Pipeline] {
00:03:16.151 [Pipeline] timeout
00:03:16.151 Timeout set to expire in 1 hr 0 min
00:03:16.153 [Pipeline] {
00:03:16.166 [Pipeline] stage
00:03:16.168 [Pipeline] { (Tests)
00:03:16.183 [Pipeline] sh
00:03:16.468 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:16.468 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:16.468 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:16.468 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:16.468 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:16.468 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:16.468 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:16.468 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:16.468 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:16.468 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:16.468 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:16.468 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:16.468 + source /etc/os-release
00:03:16.468 ++ NAME='Fedora Linux'
00:03:16.468 ++ VERSION='39 (Cloud Edition)'
00:03:16.468 ++ ID=fedora
00:03:16.468 ++ VERSION_ID=39
00:03:16.468 ++ VERSION_CODENAME=
00:03:16.468 ++ PLATFORM_ID=platform:f39
00:03:16.468 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:16.468 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:16.468 ++ LOGO=fedora-logo-icon
00:03:16.468 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:16.468 ++ HOME_URL=https://fedoraproject.org/
00:03:16.468 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:16.468 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:16.468 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:16.468 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:16.468 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:16.468 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:16.468 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:16.468 ++ SUPPORT_END=2024-11-12
00:03:16.468 ++ VARIANT='Cloud Edition'
00:03:16.468 ++ VARIANT_ID=cloud
00:03:16.468 + uname -a
00:03:16.468 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:03:16.468 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:19.759 Hugepages
00:03:19.759 node hugesize free / total
00:03:19.759 node0 1048576kB 0 / 0
00:03:19.759 node0 2048kB 0 / 0
00:03:19.759 node1 1048576kB 0 / 0
00:03:19.759 node1 2048kB 0 / 0
00:03:19.759 
00:03:19.759 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:19.759 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:19.759 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:19.759 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:19.759 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:03:19.759 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:19.759 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:19.759 + rm -f /tmp/spdk-ld-path
00:03:19.759 + source autorun-spdk.conf
00:03:19.759 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.759 ++ SPDK_TEST_NVMF=1
00:03:19.759 ++ SPDK_TEST_NVME_CLI=1
00:03:19.759 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:19.759 ++ SPDK_TEST_NVMF_NICS=e810
00:03:19.759 ++ SPDK_TEST_VFIOUSER=1
00:03:19.759 ++ SPDK_RUN_UBSAN=1
00:03:19.759 ++ NET_TYPE=phy
00:03:19.759 ++ RUN_NIGHTLY=0
00:03:19.759 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:19.759 + [[ -n '' ]]
00:03:19.759 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:19.759 + for M in /var/spdk/build-*-manifest.txt
00:03:19.759 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:19.759 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.759 + for M in /var/spdk/build-*-manifest.txt
00:03:19.759 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:19.759 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.759 + for M in /var/spdk/build-*-manifest.txt
00:03:19.759 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:19.759 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.759 ++ uname
00:03:19.759 + [[ Linux == \L\i\n\u\x ]]
00:03:19.759 + sudo dmesg -T
00:03:19.759 + sudo dmesg --clear
00:03:19.759 + dmesg_pid=4041229
00:03:19.759 + [[ Fedora Linux == FreeBSD ]]
00:03:19.759 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:19.759 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:19.759 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:19.759 + [[ -x /usr/src/fio-static/fio ]]
00:03:19.759 + export FIO_BIN=/usr/src/fio-static/fio
00:03:19.759 + FIO_BIN=/usr/src/fio-static/fio
00:03:19.759 + sudo dmesg -Tw
00:03:19.759 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:19.759 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:19.759 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:19.759 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:19.759 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:19.759 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:19.759 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:19.759 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:19.759 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:19.760 05:28:37 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:19.760 05:28:37 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:19.760 05:28:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:19.760 05:28:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:19.760 05:28:37 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:19.760 05:28:37 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:19.760 05:28:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:19.760 05:28:37 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:19.760 05:28:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:19.760 05:28:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:19.760 05:28:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:19.760 05:28:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.760 05:28:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.760 05:28:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.760 05:28:37 -- paths/export.sh@5 -- $ export PATH
00:03:19.760 05:28:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.760 05:28:37 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:19.760 05:28:37 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:19.760 05:28:37 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733804917.XXXXXX
00:03:19.760 05:28:37 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733804917.bs3YSJ
00:03:19.760 05:28:37 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:19.760 05:28:37 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:19.760 05:28:37 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:19.760 05:28:37 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:19.760 05:28:37 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:19.760 05:28:37 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:19.760 05:28:37 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:19.760 05:28:37 -- common/autotest_common.sh@10 -- $ set +x
00:03:19.760 05:28:37 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:19.760 05:28:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:19.760 05:28:37 -- pm/common@17 -- $ local monitor
00:03:19.760 05:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.760 05:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.760 05:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.760 05:28:37 -- pm/common@21 -- $ date +%s
00:03:19.760 05:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.760 05:28:37 -- pm/common@21 -- $ date +%s
00:03:19.760 05:28:37 -- pm/common@25 -- $ sleep 1
00:03:19.760 05:28:37 -- pm/common@21 -- $ date +%s
00:03:19.760 05:28:37 -- pm/common@21 -- $ date +%s
00:03:19.760 05:28:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804917
00:03:19.760 05:28:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804917
00:03:19.760 05:28:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804917
00:03:19.760 05:28:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804917
00:03:20.020 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804917_collect-vmstat.pm.log
00:03:20.020 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804917_collect-cpu-load.pm.log
00:03:20.020 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804917_collect-cpu-temp.pm.log
00:03:20.020 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804917_collect-bmc-pm.bmc.pm.log
00:03:20.957 05:28:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:20.957 05:28:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:20.957 05:28:38 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:20.957 05:28:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:20.957 05:28:38 -- spdk/autobuild.sh@16 -- $ date -u
00:03:20.957 Tue Dec 10 04:28:38 AM UTC 2024
00:03:20.957 05:28:38 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:20.957 v25.01-pre-323-g4fb5f9881
00:03:20.957 05:28:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:20.957 05:28:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:20.957 05:28:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:20.957 05:28:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:20.957 05:28:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:20.957 05:28:38 -- common/autotest_common.sh@10 -- $ set +x
00:03:20.957 ************************************
00:03:20.957 START TEST ubsan
00:03:20.957 ************************************
00:03:20.957 05:28:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:20.957 using ubsan
00:03:20.957 
00:03:20.957 real	0m0.000s
00:03:20.957 user	0m0.000s
00:03:20.957 sys	0m0.000s
00:03:20.957 05:28:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:20.957 05:28:38 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:20.957 ************************************
00:03:20.957 END TEST ubsan
00:03:20.957 ************************************
00:03:20.957 05:28:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:20.957 05:28:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:20.957 05:28:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:20.957 05:28:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:20.957 05:28:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:20.957 05:28:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:20.957 05:28:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:20.957 05:28:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:20.957 05:28:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:21.216 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:21.216 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:21.476 Using 'verbs' RDMA provider
00:03:34.628 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:46.839 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:46.839 Creating mk/config.mk...done.
00:03:46.839 Creating mk/cc.flags.mk...done.
00:03:46.839 Type 'make' to build.
00:03:46.839 05:29:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:03:46.839 05:29:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:46.839 05:29:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:46.839 05:29:04 -- common/autotest_common.sh@10 -- $ set +x
00:03:46.839 ************************************
00:03:46.839 START TEST make
00:03:46.839 ************************************
00:03:46.839 05:29:04 make -- common/autotest_common.sh@1129 -- $ make -j96
00:03:46.839 make[1]: Nothing to be done for 'all'.
00:03:48.224 The Meson build system
00:03:48.224 Version: 1.5.0
00:03:48.224 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:48.224 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:48.224 Build type: native build
00:03:48.224 Project name: libvfio-user
00:03:48.224 Project version: 0.0.1
00:03:48.224 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:48.224 C linker for the host machine: cc ld.bfd 2.40-14
00:03:48.224 Host machine cpu family: x86_64
00:03:48.224 Host machine cpu: x86_64
00:03:48.224 Run-time dependency threads found: YES
00:03:48.224 Library dl found: YES
00:03:48.224 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:48.224 Run-time dependency json-c found: YES 0.17
00:03:48.224 Run-time dependency cmocka found: YES 1.1.7
00:03:48.224 Program pytest-3 found: NO
00:03:48.224 Program flake8 found: NO
00:03:48.224 Program misspell-fixer found: NO
00:03:48.224 Program restructuredtext-lint found: NO
00:03:48.224 Program valgrind found: YES (/usr/bin/valgrind)
00:03:48.224 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:48.224 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:48.224 Compiler for C supports arguments -Wwrite-strings: YES
00:03:48.224 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:48.224 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:48.224 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:48.224 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:48.224 Build targets in project: 8
00:03:48.224 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:48.224 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:48.224 
00:03:48.224 libvfio-user 0.0.1
00:03:48.224 
00:03:48.224 User defined options
00:03:48.224 buildtype : debug
00:03:48.224 default_library: shared
00:03:48.224 libdir : /usr/local/lib
00:03:48.224 
00:03:48.224 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:48.791 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:49.048 [1/37] Compiling C object samples/null.p/null.c.o
00:03:49.049 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:49.049 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:49.049 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:49.049 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:49.049 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:49.049 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:49.049 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:49.049 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:49.049 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:49.049 [11/37] Compiling C object samples/server.p/server.c.o
00:03:49.049 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:49.049 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:49.049 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:49.049 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:49.049 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:49.049 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:49.049 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:49.049 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:49.049 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:49.049 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:49.049 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:49.049 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:49.049 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:49.049 [25/37] Compiling C object samples/client.p/client.c.o
00:03:49.049 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:49.049 [27/37] Linking target samples/client
00:03:49.049 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:49.049 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:49.049 [30/37] Linking target test/unit_tests
00:03:49.307 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:49.307 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:49.307 [33/37] Linking target samples/gpio-pci-idio-16
00:03:49.307 [34/37] Linking target samples/server
00:03:49.307 [35/37] Linking target samples/null
00:03:49.307 [36/37] Linking target samples/lspci
00:03:49.307 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:49.307 INFO: autodetecting backend as ninja
00:03:49.307 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:49.307 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:49.874 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:49.874 ninja: no work to do.
00:03:55.149 The Meson build system
00:03:55.149 Version: 1.5.0
00:03:55.149 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:55.149 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:55.149 Build type: native build
00:03:55.149 Program cat found: YES (/usr/bin/cat)
00:03:55.149 Project name: DPDK
00:03:55.149 Project version: 24.03.0
00:03:55.149 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:55.149 C linker for the host machine: cc ld.bfd 2.40-14
00:03:55.149 Host machine cpu family: x86_64
00:03:55.149 Host machine cpu: x86_64
00:03:55.149 Message: ## Building in Developer Mode ##
00:03:55.149 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:55.149 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:55.149 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:55.149 Program python3 found: YES (/usr/bin/python3)
00:03:55.149 Program cat found: YES (/usr/bin/cat)
00:03:55.149 Compiler for C supports arguments -march=native: YES
00:03:55.149 Checking for size of "void *" : 8
00:03:55.149 Checking for size of "void *" : 8 (cached)
00:03:55.149 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:55.149 Library m found: YES
00:03:55.149 Library numa found: YES
00:03:55.149 Has header "numaif.h" : YES
00:03:55.149 Library fdt found: NO
00:03:55.149 Library execinfo found: NO
00:03:55.149 Has header "execinfo.h" : YES
00:03:55.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:55.149 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:55.149 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:55.149 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:55.149 Run-time dependency openssl found: YES 3.1.1
00:03:55.149 Run-time dependency libpcap found: YES 1.10.4
00:03:55.149 Has header "pcap.h" with dependency libpcap: YES
00:03:55.149 Compiler for C supports arguments -Wcast-qual: YES
00:03:55.149 Compiler for C supports arguments -Wdeprecated: YES
00:03:55.149 Compiler for C supports arguments -Wformat: YES
00:03:55.149 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:55.149 Compiler for C supports arguments -Wformat-security: NO
00:03:55.149 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:55.149 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:55.149 Compiler for C supports arguments -Wnested-externs: YES
00:03:55.149 Compiler for C supports arguments -Wold-style-definition: YES
00:03:55.149 Compiler for C supports arguments -Wpointer-arith: YES
00:03:55.149 Compiler for C supports arguments -Wsign-compare: YES
00:03:55.149 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:55.149 Compiler for C supports arguments -Wundef: YES
00:03:55.149 Compiler for C supports arguments -Wwrite-strings: YES
00:03:55.149 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:55.149 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:55.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:55.149 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:55.149 Program objdump found: YES (/usr/bin/objdump)
00:03:55.149 Compiler for C supports arguments -mavx512f: YES
00:03:55.149 Checking if "AVX512 checking" compiles: YES
00:03:55.149 Fetching value of define "__SSE4_2__" : 1
00:03:55.149 Fetching value of define "__AES__" : 1
00:03:55.149 Fetching value of define "__AVX__" : 1
00:03:55.149 Fetching value of define "__AVX2__" : 1
00:03:55.149 Fetching value of define "__AVX512BW__" : 1
00:03:55.149 Fetching value of define "__AVX512CD__" : 1
00:03:55.149 Fetching value of define "__AVX512DQ__" : 1
00:03:55.149 Fetching value of define "__AVX512F__" : 1
00:03:55.149 Fetching value of define "__AVX512VL__" : 1 00:03:55.149 Fetching value of define "__PCLMUL__" : 1 00:03:55.149 Fetching value of define "__RDRND__" : 1 00:03:55.149 Fetching value of define "__RDSEED__" : 1 00:03:55.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:55.149 Fetching value of define "__znver1__" : (undefined) 00:03:55.149 Fetching value of define "__znver2__" : (undefined) 00:03:55.149 Fetching value of define "__znver3__" : (undefined) 00:03:55.149 Fetching value of define "__znver4__" : (undefined) 00:03:55.149 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:55.149 Message: lib/log: Defining dependency "log" 00:03:55.149 Message: lib/kvargs: Defining dependency "kvargs" 00:03:55.149 Message: lib/telemetry: Defining dependency "telemetry" 00:03:55.149 Checking for function "getentropy" : NO 00:03:55.149 Message: lib/eal: Defining dependency "eal" 00:03:55.149 Message: lib/ring: Defining dependency "ring" 00:03:55.149 Message: lib/rcu: Defining dependency "rcu" 00:03:55.149 Message: lib/mempool: Defining dependency "mempool" 00:03:55.149 Message: lib/mbuf: Defining dependency "mbuf" 00:03:55.149 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:55.149 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:55.149 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:55.149 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:55.149 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:55.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:55.149 Compiler for C supports arguments -mpclmul: YES 00:03:55.149 Compiler for C supports arguments -maes: YES 00:03:55.149 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:55.149 Compiler for C supports arguments -mavx512bw: YES 00:03:55.149 Compiler for C supports arguments -mavx512dq: YES 00:03:55.149 Compiler for C supports arguments -mavx512vl: YES 00:03:55.149 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:03:55.149 Compiler for C supports arguments -mavx2: YES 00:03:55.149 Compiler for C supports arguments -mavx: YES 00:03:55.149 Message: lib/net: Defining dependency "net" 00:03:55.149 Message: lib/meter: Defining dependency "meter" 00:03:55.149 Message: lib/ethdev: Defining dependency "ethdev" 00:03:55.149 Message: lib/pci: Defining dependency "pci" 00:03:55.149 Message: lib/cmdline: Defining dependency "cmdline" 00:03:55.149 Message: lib/hash: Defining dependency "hash" 00:03:55.149 Message: lib/timer: Defining dependency "timer" 00:03:55.149 Message: lib/compressdev: Defining dependency "compressdev" 00:03:55.150 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:55.150 Message: lib/dmadev: Defining dependency "dmadev" 00:03:55.150 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:55.150 Message: lib/power: Defining dependency "power" 00:03:55.150 Message: lib/reorder: Defining dependency "reorder" 00:03:55.150 Message: lib/security: Defining dependency "security" 00:03:55.150 Has header "linux/userfaultfd.h" : YES 00:03:55.150 Has header "linux/vduse.h" : YES 00:03:55.150 Message: lib/vhost: Defining dependency "vhost" 00:03:55.150 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:55.150 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:55.150 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:55.150 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:55.150 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:55.150 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:55.150 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:55.150 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:55.150 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:55.150 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:03:55.150 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:55.150 Configuring doxy-api-html.conf using configuration 00:03:55.150 Configuring doxy-api-man.conf using configuration 00:03:55.150 Program mandb found: YES (/usr/bin/mandb) 00:03:55.150 Program sphinx-build found: NO 00:03:55.150 Configuring rte_build_config.h using configuration 00:03:55.150 Message: 00:03:55.150 ================= 00:03:55.150 Applications Enabled 00:03:55.150 ================= 00:03:55.150 00:03:55.150 apps: 00:03:55.150 00:03:55.150 00:03:55.150 Message: 00:03:55.150 ================= 00:03:55.150 Libraries Enabled 00:03:55.150 ================= 00:03:55.150 00:03:55.150 libs: 00:03:55.150 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:55.150 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:55.150 cryptodev, dmadev, power, reorder, security, vhost, 00:03:55.150 00:03:55.150 Message: 00:03:55.150 =============== 00:03:55.150 Drivers Enabled 00:03:55.150 =============== 00:03:55.150 00:03:55.150 common: 00:03:55.150 00:03:55.150 bus: 00:03:55.150 pci, vdev, 00:03:55.150 mempool: 00:03:55.150 ring, 00:03:55.150 dma: 00:03:55.150 00:03:55.150 net: 00:03:55.150 00:03:55.150 crypto: 00:03:55.150 00:03:55.150 compress: 00:03:55.150 00:03:55.150 vdpa: 00:03:55.150 00:03:55.150 00:03:55.150 Message: 00:03:55.150 ================= 00:03:55.150 Content Skipped 00:03:55.150 ================= 00:03:55.150 00:03:55.150 apps: 00:03:55.150 dumpcap: explicitly disabled via build config 00:03:55.150 graph: explicitly disabled via build config 00:03:55.150 pdump: explicitly disabled via build config 00:03:55.150 proc-info: explicitly disabled via build config 00:03:55.150 test-acl: explicitly disabled via build config 00:03:55.150 test-bbdev: explicitly disabled via build config 00:03:55.150 test-cmdline: explicitly disabled via build config 00:03:55.150 test-compress-perf: explicitly disabled via build config 00:03:55.150 test-crypto-perf: explicitly disabled 
via build config 00:03:55.150 test-dma-perf: explicitly disabled via build config 00:03:55.150 test-eventdev: explicitly disabled via build config 00:03:55.150 test-fib: explicitly disabled via build config 00:03:55.150 test-flow-perf: explicitly disabled via build config 00:03:55.150 test-gpudev: explicitly disabled via build config 00:03:55.150 test-mldev: explicitly disabled via build config 00:03:55.150 test-pipeline: explicitly disabled via build config 00:03:55.150 test-pmd: explicitly disabled via build config 00:03:55.150 test-regex: explicitly disabled via build config 00:03:55.150 test-sad: explicitly disabled via build config 00:03:55.150 test-security-perf: explicitly disabled via build config 00:03:55.150 00:03:55.150 libs: 00:03:55.150 argparse: explicitly disabled via build config 00:03:55.150 metrics: explicitly disabled via build config 00:03:55.150 acl: explicitly disabled via build config 00:03:55.150 bbdev: explicitly disabled via build config 00:03:55.150 bitratestats: explicitly disabled via build config 00:03:55.150 bpf: explicitly disabled via build config 00:03:55.150 cfgfile: explicitly disabled via build config 00:03:55.150 distributor: explicitly disabled via build config 00:03:55.150 efd: explicitly disabled via build config 00:03:55.150 eventdev: explicitly disabled via build config 00:03:55.150 dispatcher: explicitly disabled via build config 00:03:55.150 gpudev: explicitly disabled via build config 00:03:55.150 gro: explicitly disabled via build config 00:03:55.150 gso: explicitly disabled via build config 00:03:55.150 ip_frag: explicitly disabled via build config 00:03:55.150 jobstats: explicitly disabled via build config 00:03:55.150 latencystats: explicitly disabled via build config 00:03:55.150 lpm: explicitly disabled via build config 00:03:55.150 member: explicitly disabled via build config 00:03:55.150 pcapng: explicitly disabled via build config 00:03:55.150 rawdev: explicitly disabled via build config 00:03:55.150 regexdev: 
explicitly disabled via build config 00:03:55.150 mldev: explicitly disabled via build config 00:03:55.150 rib: explicitly disabled via build config 00:03:55.150 sched: explicitly disabled via build config 00:03:55.150 stack: explicitly disabled via build config 00:03:55.150 ipsec: explicitly disabled via build config 00:03:55.150 pdcp: explicitly disabled via build config 00:03:55.150 fib: explicitly disabled via build config 00:03:55.150 port: explicitly disabled via build config 00:03:55.150 pdump: explicitly disabled via build config 00:03:55.150 table: explicitly disabled via build config 00:03:55.150 pipeline: explicitly disabled via build config 00:03:55.150 graph: explicitly disabled via build config 00:03:55.150 node: explicitly disabled via build config 00:03:55.150 00:03:55.150 drivers: 00:03:55.150 common/cpt: not in enabled drivers build config 00:03:55.150 common/dpaax: not in enabled drivers build config 00:03:55.150 common/iavf: not in enabled drivers build config 00:03:55.150 common/idpf: not in enabled drivers build config 00:03:55.150 common/ionic: not in enabled drivers build config 00:03:55.150 common/mvep: not in enabled drivers build config 00:03:55.150 common/octeontx: not in enabled drivers build config 00:03:55.150 bus/auxiliary: not in enabled drivers build config 00:03:55.150 bus/cdx: not in enabled drivers build config 00:03:55.150 bus/dpaa: not in enabled drivers build config 00:03:55.150 bus/fslmc: not in enabled drivers build config 00:03:55.150 bus/ifpga: not in enabled drivers build config 00:03:55.150 bus/platform: not in enabled drivers build config 00:03:55.150 bus/uacce: not in enabled drivers build config 00:03:55.150 bus/vmbus: not in enabled drivers build config 00:03:55.150 common/cnxk: not in enabled drivers build config 00:03:55.150 common/mlx5: not in enabled drivers build config 00:03:55.150 common/nfp: not in enabled drivers build config 00:03:55.150 common/nitrox: not in enabled drivers build config 00:03:55.150 
common/qat: not in enabled drivers build config 00:03:55.150 common/sfc_efx: not in enabled drivers build config 00:03:55.150 mempool/bucket: not in enabled drivers build config 00:03:55.150 mempool/cnxk: not in enabled drivers build config 00:03:55.150 mempool/dpaa: not in enabled drivers build config 00:03:55.150 mempool/dpaa2: not in enabled drivers build config 00:03:55.150 mempool/octeontx: not in enabled drivers build config 00:03:55.150 mempool/stack: not in enabled drivers build config 00:03:55.150 dma/cnxk: not in enabled drivers build config 00:03:55.150 dma/dpaa: not in enabled drivers build config 00:03:55.150 dma/dpaa2: not in enabled drivers build config 00:03:55.150 dma/hisilicon: not in enabled drivers build config 00:03:55.150 dma/idxd: not in enabled drivers build config 00:03:55.150 dma/ioat: not in enabled drivers build config 00:03:55.150 dma/skeleton: not in enabled drivers build config 00:03:55.150 net/af_packet: not in enabled drivers build config 00:03:55.150 net/af_xdp: not in enabled drivers build config 00:03:55.150 net/ark: not in enabled drivers build config 00:03:55.150 net/atlantic: not in enabled drivers build config 00:03:55.150 net/avp: not in enabled drivers build config 00:03:55.150 net/axgbe: not in enabled drivers build config 00:03:55.150 net/bnx2x: not in enabled drivers build config 00:03:55.150 net/bnxt: not in enabled drivers build config 00:03:55.150 net/bonding: not in enabled drivers build config 00:03:55.150 net/cnxk: not in enabled drivers build config 00:03:55.150 net/cpfl: not in enabled drivers build config 00:03:55.150 net/cxgbe: not in enabled drivers build config 00:03:55.150 net/dpaa: not in enabled drivers build config 00:03:55.150 net/dpaa2: not in enabled drivers build config 00:03:55.150 net/e1000: not in enabled drivers build config 00:03:55.150 net/ena: not in enabled drivers build config 00:03:55.150 net/enetc: not in enabled drivers build config 00:03:55.150 net/enetfec: not in enabled drivers build 
config 00:03:55.150 net/enic: not in enabled drivers build config 00:03:55.150 net/failsafe: not in enabled drivers build config 00:03:55.150 net/fm10k: not in enabled drivers build config 00:03:55.150 net/gve: not in enabled drivers build config 00:03:55.150 net/hinic: not in enabled drivers build config 00:03:55.150 net/hns3: not in enabled drivers build config 00:03:55.150 net/i40e: not in enabled drivers build config 00:03:55.150 net/iavf: not in enabled drivers build config 00:03:55.150 net/ice: not in enabled drivers build config 00:03:55.150 net/idpf: not in enabled drivers build config 00:03:55.150 net/igc: not in enabled drivers build config 00:03:55.150 net/ionic: not in enabled drivers build config 00:03:55.150 net/ipn3ke: not in enabled drivers build config 00:03:55.150 net/ixgbe: not in enabled drivers build config 00:03:55.150 net/mana: not in enabled drivers build config 00:03:55.150 net/memif: not in enabled drivers build config 00:03:55.150 net/mlx4: not in enabled drivers build config 00:03:55.150 net/mlx5: not in enabled drivers build config 00:03:55.150 net/mvneta: not in enabled drivers build config 00:03:55.150 net/mvpp2: not in enabled drivers build config 00:03:55.150 net/netvsc: not in enabled drivers build config 00:03:55.150 net/nfb: not in enabled drivers build config 00:03:55.150 net/nfp: not in enabled drivers build config 00:03:55.150 net/ngbe: not in enabled drivers build config 00:03:55.150 net/null: not in enabled drivers build config 00:03:55.150 net/octeontx: not in enabled drivers build config 00:03:55.150 net/octeon_ep: not in enabled drivers build config 00:03:55.150 net/pcap: not in enabled drivers build config 00:03:55.151 net/pfe: not in enabled drivers build config 00:03:55.151 net/qede: not in enabled drivers build config 00:03:55.151 net/ring: not in enabled drivers build config 00:03:55.151 net/sfc: not in enabled drivers build config 00:03:55.151 net/softnic: not in enabled drivers build config 00:03:55.151 net/tap: 
not in enabled drivers build config 00:03:55.151 net/thunderx: not in enabled drivers build config 00:03:55.151 net/txgbe: not in enabled drivers build config 00:03:55.151 net/vdev_netvsc: not in enabled drivers build config 00:03:55.151 net/vhost: not in enabled drivers build config 00:03:55.151 net/virtio: not in enabled drivers build config 00:03:55.151 net/vmxnet3: not in enabled drivers build config 00:03:55.151 raw/*: missing internal dependency, "rawdev" 00:03:55.151 crypto/armv8: not in enabled drivers build config 00:03:55.151 crypto/bcmfs: not in enabled drivers build config 00:03:55.151 crypto/caam_jr: not in enabled drivers build config 00:03:55.151 crypto/ccp: not in enabled drivers build config 00:03:55.151 crypto/cnxk: not in enabled drivers build config 00:03:55.151 crypto/dpaa_sec: not in enabled drivers build config 00:03:55.151 crypto/dpaa2_sec: not in enabled drivers build config 00:03:55.151 crypto/ipsec_mb: not in enabled drivers build config 00:03:55.151 crypto/mlx5: not in enabled drivers build config 00:03:55.151 crypto/mvsam: not in enabled drivers build config 00:03:55.151 crypto/nitrox: not in enabled drivers build config 00:03:55.151 crypto/null: not in enabled drivers build config 00:03:55.151 crypto/octeontx: not in enabled drivers build config 00:03:55.151 crypto/openssl: not in enabled drivers build config 00:03:55.151 crypto/scheduler: not in enabled drivers build config 00:03:55.151 crypto/uadk: not in enabled drivers build config 00:03:55.151 crypto/virtio: not in enabled drivers build config 00:03:55.151 compress/isal: not in enabled drivers build config 00:03:55.151 compress/mlx5: not in enabled drivers build config 00:03:55.151 compress/nitrox: not in enabled drivers build config 00:03:55.151 compress/octeontx: not in enabled drivers build config 00:03:55.151 compress/zlib: not in enabled drivers build config 00:03:55.151 regex/*: missing internal dependency, "regexdev" 00:03:55.151 ml/*: missing internal dependency, "mldev" 
00:03:55.151 vdpa/ifc: not in enabled drivers build config 00:03:55.151 vdpa/mlx5: not in enabled drivers build config 00:03:55.151 vdpa/nfp: not in enabled drivers build config 00:03:55.151 vdpa/sfc: not in enabled drivers build config 00:03:55.151 event/*: missing internal dependency, "eventdev" 00:03:55.151 baseband/*: missing internal dependency, "bbdev" 00:03:55.151 gpu/*: missing internal dependency, "gpudev" 00:03:55.151 00:03:55.151 00:03:55.151 Build targets in project: 85 00:03:55.151 00:03:55.151 DPDK 24.03.0 00:03:55.151 00:03:55.151 User defined options 00:03:55.151 buildtype : debug 00:03:55.151 default_library : shared 00:03:55.151 libdir : lib 00:03:55.151 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:55.151 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:55.151 c_link_args : 00:03:55.151 cpu_instruction_set: native 00:03:55.151 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:55.151 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:55.151 enable_docs : false 00:03:55.151 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:55.151 enable_kmods : false 00:03:55.151 max_lcores : 128 00:03:55.151 tests : false 00:03:55.151 00:03:55.151 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:55.410 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:55.676 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:55.676 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:55.676 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:55.676 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:55.676 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:55.676 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:55.676 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:55.676 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:55.676 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:55.676 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:55.676 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:55.676 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:55.677 [13/268] Linking static target lib/librte_kvargs.a 00:03:55.677 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:55.677 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:55.677 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:55.677 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:55.677 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:55.939 [19/268] Linking static target lib/librte_log.a 00:03:55.939 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:55.939 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:55.939 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:55.939 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:55.939 [24/268] Linking static target lib/librte_pci.a 00:03:55.939 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:56.203 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:56.203 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:56.203 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:56.203 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:56.203 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:56.203 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:56.203 [32/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:56.203 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:56.203 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:56.203 [35/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:56.203 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:56.203 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:56.203 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:56.203 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:56.203 [40/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:56.203 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:56.203 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:56.203 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:56.203 [44/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:56.203 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:56.203 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:56.203 [47/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:56.203 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:56.203 [49/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:56.203 [50/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:56.203 [51/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:56.203 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:56.203 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:56.203 [54/268] Linking static target lib/librte_meter.a 00:03:56.203 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:56.203 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:56.203 [57/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:56.203 [58/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:56.203 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:56.203 [60/268] Linking static target lib/librte_ring.a 00:03:56.203 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:56.203 [62/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:56.203 [63/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:56.203 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:56.203 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:56.203 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:56.203 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:56.203 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:56.203 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:56.203 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:56.203 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:56.203 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:56.462 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:56.462 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:56.462 [75/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:56.462 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:56.462 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:56.462 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:56.462 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:56.462 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:56.462 [81/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.462 [82/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:56.462 [83/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:56.462 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:56.462 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:56.462 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:56.462 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:56.462 [88/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:56.462 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:56.462 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:56.462 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:56.462 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:56.462 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:56.462 [94/268] Linking static target lib/librte_telemetry.a 00:03:56.462 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:56.462 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:56.462 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:56.462 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:56.462 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:56.462 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:56.462 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:56.462 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:56.462 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:56.462 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:56.462 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.462 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:56.462 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:56.462 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:56.462 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:56.462 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:56.462 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:56.462 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:56.462 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:56.462 [114/268] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:03:56.462 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:56.462 [116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:56.462 [117/268] Linking static target lib/librte_net.a 00:03:56.462 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:56.462 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:56.463 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:56.463 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:56.463 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:56.463 [123/268] Linking static target lib/librte_eal.a 00:03:56.463 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:56.463 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:56.463 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:56.463 [127/268] Linking static target lib/librte_mempool.a 00:03:56.463 [128/268] Linking static target lib/librte_cmdline.a 00:03:56.463 [129/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.463 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:56.463 [131/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:56.463 [132/268] Linking static target lib/librte_rcu.a 00:03:56.722 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:56.722 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:56.722 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:56.722 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.722 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.722 
[138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:56.722 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:56.722 [140/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:56.722 [141/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:56.722 [142/268] Linking static target lib/librte_mbuf.a 00:03:56.722 [143/268] Linking static target lib/librte_timer.a 00:03:56.722 [144/268] Linking target lib/librte_log.so.24.1 00:03:56.722 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:56.722 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:56.722 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:56.722 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:56.722 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:56.723 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:56.723 [151/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:56.723 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:56.723 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:56.723 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:56.723 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.723 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:56.723 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:56.723 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:56.723 [159/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:56.723 [160/268] Linking static target lib/librte_compressdev.a 
00:03:56.723 [161/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:56.723 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:56.723 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:56.723 [164/268] Linking static target lib/librte_dmadev.a 00:03:56.723 [165/268] Linking target lib/librte_kvargs.so.24.1 00:03:56.982 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:56.982 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:56.982 [168/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.982 [169/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:56.982 [170/268] Linking static target lib/librte_reorder.a 00:03:56.982 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:56.982 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:56.982 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:56.982 [174/268] Linking target lib/librte_telemetry.so.24.1 00:03:56.982 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:56.982 [176/268] Linking static target lib/librte_power.a 00:03:56.982 [177/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.982 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:56.982 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:56.982 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:56.982 [181/268] Linking static target lib/librte_security.a 00:03:56.982 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:56.982 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 
00:03:56.982 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:56.982 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:56.982 [186/268] Linking static target lib/librte_hash.a 00:03:56.982 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:56.982 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:56.982 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:56.982 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:56.982 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:56.982 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:56.982 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:56.982 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:56.982 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:56.982 [196/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.982 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:56.982 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:57.246 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:57.246 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:57.246 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:57.246 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:57.246 [203/268] Linking static target drivers/librte_bus_vdev.a 00:03:57.246 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:57.246 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:03:57.246 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:57.246 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.246 [208/268] Linking static target drivers/librte_mempool_ring.a 00:03:57.246 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:57.246 [210/268] Linking static target lib/librte_cryptodev.a 00:03:57.246 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.246 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.246 [213/268] Linking static target drivers/librte_bus_pci.a 00:03:57.246 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.505 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.505 [216/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.505 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.505 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.505 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.505 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:57.505 [221/268] Linking static target lib/librte_ethdev.a 00:03:57.505 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.764 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.764 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.764 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 
00:03:57.764 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.022 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.957 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:58.957 [229/268] Linking static target lib/librte_vhost.a 00:03:59.216 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.117 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.385 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.385 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.644 [234/268] Linking target lib/librte_eal.so.24.1 00:04:06.644 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:06.644 [236/268] Linking target lib/librte_dmadev.so.24.1 00:04:06.644 [237/268] Linking target lib/librte_ring.so.24.1 00:04:06.644 [238/268] Linking target lib/librte_timer.so.24.1 00:04:06.644 [239/268] Linking target lib/librte_pci.so.24.1 00:04:06.644 [240/268] Linking target lib/librte_meter.so.24.1 00:04:06.644 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:06.904 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:06.904 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:06.904 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:06.904 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:06.904 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:06.904 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:06.904 [248/268] Linking target 
lib/librte_rcu.so.24.1 00:04:06.904 [249/268] Linking target lib/librte_mempool.so.24.1 00:04:06.904 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:07.162 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:07.162 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:07.162 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:07.162 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:07.162 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:07.162 [256/268] Linking target lib/librte_net.so.24.1 00:04:07.162 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:07.162 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:07.420 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:07.420 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:07.420 [261/268] Linking target lib/librte_cmdline.so.24.1 00:04:07.420 [262/268] Linking target lib/librte_hash.so.24.1 00:04:07.420 [263/268] Linking target lib/librte_security.so.24.1 00:04:07.420 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:07.679 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:07.679 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:07.679 [267/268] Linking target lib/librte_power.so.24.1 00:04:07.679 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:07.679 INFO: autodetecting backend as ninja 00:04:07.679 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:17.656 CC lib/log/log.o 00:04:17.656 CC lib/log/log_flags.o 00:04:17.656 CC lib/log/log_deprecated.o 00:04:17.656 CC lib/ut_mock/mock.o 00:04:17.656 CC lib/ut/ut.o 00:04:17.656 LIB libspdk_ut_mock.a 
00:04:17.656 LIB libspdk_log.a 00:04:17.656 LIB libspdk_ut.a 00:04:17.656 SO libspdk_ut_mock.so.6.0 00:04:17.656 SO libspdk_log.so.7.1 00:04:17.656 SO libspdk_ut.so.2.0 00:04:17.656 SYMLINK libspdk_ut_mock.so 00:04:17.656 SYMLINK libspdk_log.so 00:04:17.915 SYMLINK libspdk_ut.so 00:04:18.174 CC lib/dma/dma.o 00:04:18.174 CC lib/ioat/ioat.o 00:04:18.174 CC lib/util/base64.o 00:04:18.174 CC lib/util/bit_array.o 00:04:18.174 CXX lib/trace_parser/trace.o 00:04:18.174 CC lib/util/cpuset.o 00:04:18.174 CC lib/util/crc16.o 00:04:18.174 CC lib/util/crc32.o 00:04:18.174 CC lib/util/crc32c.o 00:04:18.174 CC lib/util/crc32_ieee.o 00:04:18.174 CC lib/util/crc64.o 00:04:18.174 CC lib/util/dif.o 00:04:18.174 CC lib/util/fd.o 00:04:18.174 CC lib/util/fd_group.o 00:04:18.174 CC lib/util/file.o 00:04:18.174 CC lib/util/hexlify.o 00:04:18.174 CC lib/util/iov.o 00:04:18.174 CC lib/util/math.o 00:04:18.174 CC lib/util/net.o 00:04:18.174 CC lib/util/pipe.o 00:04:18.174 CC lib/util/strerror_tls.o 00:04:18.174 CC lib/util/string.o 00:04:18.174 CC lib/util/uuid.o 00:04:18.174 CC lib/util/xor.o 00:04:18.174 CC lib/util/zipf.o 00:04:18.174 CC lib/util/md5.o 00:04:18.174 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.174 CC lib/vfio_user/host/vfio_user.o 00:04:18.432 LIB libspdk_dma.a 00:04:18.432 SO libspdk_dma.so.5.0 00:04:18.432 LIB libspdk_ioat.a 00:04:18.432 SYMLINK libspdk_dma.so 00:04:18.432 SO libspdk_ioat.so.7.0 00:04:18.432 SYMLINK libspdk_ioat.so 00:04:18.432 LIB libspdk_vfio_user.a 00:04:18.432 SO libspdk_vfio_user.so.5.0 00:04:18.690 SYMLINK libspdk_vfio_user.so 00:04:18.690 LIB libspdk_util.a 00:04:18.690 SO libspdk_util.so.10.1 00:04:18.690 SYMLINK libspdk_util.so 00:04:18.949 LIB libspdk_trace_parser.a 00:04:18.949 SO libspdk_trace_parser.so.6.0 00:04:18.949 SYMLINK libspdk_trace_parser.so 00:04:18.949 CC lib/vmd/vmd.o 00:04:18.949 CC lib/vmd/led.o 00:04:18.949 CC lib/rdma_utils/rdma_utils.o 00:04:18.949 CC lib/conf/conf.o 00:04:19.208 CC lib/env_dpdk/env.o 00:04:19.208 CC 
lib/json/json_parse.o 00:04:19.208 CC lib/env_dpdk/memory.o 00:04:19.208 CC lib/json/json_util.o 00:04:19.208 CC lib/idxd/idxd.o 00:04:19.208 CC lib/json/json_write.o 00:04:19.208 CC lib/env_dpdk/pci.o 00:04:19.208 CC lib/env_dpdk/threads.o 00:04:19.208 CC lib/idxd/idxd_kernel.o 00:04:19.208 CC lib/env_dpdk/pci_ioat.o 00:04:19.208 CC lib/idxd/idxd_user.o 00:04:19.208 CC lib/env_dpdk/init.o 00:04:19.208 CC lib/env_dpdk/pci_virtio.o 00:04:19.208 CC lib/env_dpdk/pci_vmd.o 00:04:19.208 CC lib/env_dpdk/pci_idxd.o 00:04:19.208 CC lib/env_dpdk/pci_event.o 00:04:19.208 CC lib/env_dpdk/sigbus_handler.o 00:04:19.208 CC lib/env_dpdk/pci_dpdk.o 00:04:19.208 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.208 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.208 LIB libspdk_conf.a 00:04:19.208 SO libspdk_conf.so.6.0 00:04:19.466 LIB libspdk_rdma_utils.a 00:04:19.466 SO libspdk_rdma_utils.so.1.0 00:04:19.466 SYMLINK libspdk_conf.so 00:04:19.466 LIB libspdk_json.a 00:04:19.466 SYMLINK libspdk_rdma_utils.so 00:04:19.466 SO libspdk_json.so.6.0 00:04:19.466 SYMLINK libspdk_json.so 00:04:19.466 LIB libspdk_idxd.a 00:04:19.725 SO libspdk_idxd.so.12.1 00:04:19.725 LIB libspdk_vmd.a 00:04:19.725 SO libspdk_vmd.so.6.0 00:04:19.725 SYMLINK libspdk_idxd.so 00:04:19.725 CC lib/rdma_provider/common.o 00:04:19.725 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.725 SYMLINK libspdk_vmd.so 00:04:19.725 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.725 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.725 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.725 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.025 LIB libspdk_rdma_provider.a 00:04:20.025 SO libspdk_rdma_provider.so.7.0 00:04:20.025 SYMLINK libspdk_rdma_provider.so 00:04:20.025 LIB libspdk_jsonrpc.a 00:04:20.025 SO libspdk_jsonrpc.so.6.0 00:04:20.025 SYMLINK libspdk_jsonrpc.so 00:04:20.327 LIB libspdk_env_dpdk.a 00:04:20.327 SO libspdk_env_dpdk.so.15.1 00:04:20.327 SYMLINK libspdk_env_dpdk.so 00:04:20.327 CC lib/rpc/rpc.o 00:04:20.608 LIB libspdk_rpc.a 00:04:20.608 SO 
libspdk_rpc.so.6.0 00:04:20.608 SYMLINK libspdk_rpc.so 00:04:20.891 CC lib/notify/notify.o 00:04:20.891 CC lib/notify/notify_rpc.o 00:04:20.891 CC lib/trace/trace.o 00:04:20.891 CC lib/trace/trace_flags.o 00:04:20.891 CC lib/keyring/keyring.o 00:04:20.891 CC lib/trace/trace_rpc.o 00:04:20.891 CC lib/keyring/keyring_rpc.o 00:04:21.150 LIB libspdk_notify.a 00:04:21.150 SO libspdk_notify.so.6.0 00:04:21.150 LIB libspdk_keyring.a 00:04:21.150 SYMLINK libspdk_notify.so 00:04:21.150 LIB libspdk_trace.a 00:04:21.150 SO libspdk_keyring.so.2.0 00:04:21.409 SO libspdk_trace.so.11.0 00:04:21.409 SYMLINK libspdk_keyring.so 00:04:21.409 SYMLINK libspdk_trace.so 00:04:21.668 CC lib/thread/thread.o 00:04:21.668 CC lib/sock/sock.o 00:04:21.668 CC lib/thread/iobuf.o 00:04:21.668 CC lib/sock/sock_rpc.o 00:04:21.926 LIB libspdk_sock.a 00:04:21.926 SO libspdk_sock.so.10.0 00:04:22.185 SYMLINK libspdk_sock.so 00:04:22.442 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.442 CC lib/nvme/nvme_ctrlr.o 00:04:22.442 CC lib/nvme/nvme_fabric.o 00:04:22.442 CC lib/nvme/nvme_ns_cmd.o 00:04:22.442 CC lib/nvme/nvme_ns.o 00:04:22.442 CC lib/nvme/nvme_pcie_common.o 00:04:22.442 CC lib/nvme/nvme_pcie.o 00:04:22.442 CC lib/nvme/nvme_qpair.o 00:04:22.442 CC lib/nvme/nvme.o 00:04:22.442 CC lib/nvme/nvme_quirks.o 00:04:22.442 CC lib/nvme/nvme_transport.o 00:04:22.442 CC lib/nvme/nvme_discovery.o 00:04:22.442 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.442 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.442 CC lib/nvme/nvme_tcp.o 00:04:22.442 CC lib/nvme/nvme_opal.o 00:04:22.442 CC lib/nvme/nvme_io_msg.o 00:04:22.442 CC lib/nvme/nvme_poll_group.o 00:04:22.442 CC lib/nvme/nvme_zns.o 00:04:22.442 CC lib/nvme/nvme_stubs.o 00:04:22.442 CC lib/nvme/nvme_auth.o 00:04:22.442 CC lib/nvme/nvme_cuse.o 00:04:22.442 CC lib/nvme/nvme_vfio_user.o 00:04:22.442 CC lib/nvme/nvme_rdma.o 00:04:22.701 LIB libspdk_thread.a 00:04:22.701 SO libspdk_thread.so.11.0 00:04:22.959 SYMLINK libspdk_thread.so 00:04:23.216 CC lib/vfu_tgt/tgt_endpoint.o 
00:04:23.216 CC lib/vfu_tgt/tgt_rpc.o 00:04:23.216 CC lib/virtio/virtio.o 00:04:23.216 CC lib/virtio/virtio_vhost_user.o 00:04:23.216 CC lib/virtio/virtio_vfio_user.o 00:04:23.216 CC lib/virtio/virtio_pci.o 00:04:23.216 CC lib/accel/accel_rpc.o 00:04:23.216 CC lib/accel/accel.o 00:04:23.216 CC lib/fsdev/fsdev.o 00:04:23.216 CC lib/accel/accel_sw.o 00:04:23.216 CC lib/fsdev/fsdev_io.o 00:04:23.216 CC lib/fsdev/fsdev_rpc.o 00:04:23.216 CC lib/blob/blobstore.o 00:04:23.216 CC lib/init/json_config.o 00:04:23.216 CC lib/blob/request.o 00:04:23.216 CC lib/blob/zeroes.o 00:04:23.216 CC lib/init/subsystem.o 00:04:23.216 CC lib/blob/blob_bs_dev.o 00:04:23.216 CC lib/init/subsystem_rpc.o 00:04:23.216 CC lib/init/rpc.o 00:04:23.475 LIB libspdk_init.a 00:04:23.475 SO libspdk_init.so.6.0 00:04:23.475 LIB libspdk_virtio.a 00:04:23.475 LIB libspdk_vfu_tgt.a 00:04:23.475 SO libspdk_virtio.so.7.0 00:04:23.475 SYMLINK libspdk_init.so 00:04:23.475 SO libspdk_vfu_tgt.so.3.0 00:04:23.475 SYMLINK libspdk_virtio.so 00:04:23.475 SYMLINK libspdk_vfu_tgt.so 00:04:23.733 LIB libspdk_fsdev.a 00:04:23.733 SO libspdk_fsdev.so.2.0 00:04:23.733 CC lib/event/app.o 00:04:23.733 CC lib/event/reactor.o 00:04:23.733 CC lib/event/log_rpc.o 00:04:23.733 SYMLINK libspdk_fsdev.so 00:04:23.733 CC lib/event/app_rpc.o 00:04:23.733 CC lib/event/scheduler_static.o 00:04:23.991 LIB libspdk_accel.a 00:04:23.991 SO libspdk_accel.so.16.0 00:04:23.991 LIB libspdk_nvme.a 00:04:23.991 SYMLINK libspdk_accel.so 00:04:23.991 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.249 LIB libspdk_event.a 00:04:24.249 SO libspdk_nvme.so.15.0 00:04:24.249 SO libspdk_event.so.14.0 00:04:24.249 SYMLINK libspdk_event.so 00:04:24.249 SYMLINK libspdk_nvme.so 00:04:24.507 CC lib/bdev/bdev.o 00:04:24.507 CC lib/bdev/bdev_rpc.o 00:04:24.507 CC lib/bdev/bdev_zone.o 00:04:24.507 CC lib/bdev/part.o 00:04:24.507 CC lib/bdev/scsi_nvme.o 00:04:24.507 LIB libspdk_fuse_dispatcher.a 00:04:24.507 SO libspdk_fuse_dispatcher.so.1.0 00:04:24.765 
SYMLINK libspdk_fuse_dispatcher.so 00:04:25.332 LIB libspdk_blob.a 00:04:25.332 SO libspdk_blob.so.12.0 00:04:25.332 SYMLINK libspdk_blob.so 00:04:25.898 CC lib/lvol/lvol.o 00:04:25.898 CC lib/blobfs/blobfs.o 00:04:25.898 CC lib/blobfs/tree.o 00:04:26.157 LIB libspdk_bdev.a 00:04:26.416 SO libspdk_bdev.so.17.0 00:04:26.416 LIB libspdk_blobfs.a 00:04:26.416 SO libspdk_blobfs.so.11.0 00:04:26.416 SYMLINK libspdk_bdev.so 00:04:26.416 LIB libspdk_lvol.a 00:04:26.416 SYMLINK libspdk_blobfs.so 00:04:26.416 SO libspdk_lvol.so.11.0 00:04:26.416 SYMLINK libspdk_lvol.so 00:04:26.675 CC lib/scsi/dev.o 00:04:26.675 CC lib/scsi/lun.o 00:04:26.675 CC lib/scsi/port.o 00:04:26.675 CC lib/scsi/scsi.o 00:04:26.675 CC lib/scsi/scsi_bdev.o 00:04:26.675 CC lib/scsi/scsi_pr.o 00:04:26.675 CC lib/scsi/scsi_rpc.o 00:04:26.675 CC lib/scsi/task.o 00:04:26.675 CC lib/ublk/ublk.o 00:04:26.675 CC lib/ublk/ublk_rpc.o 00:04:26.675 CC lib/nvmf/ctrlr.o 00:04:26.675 CC lib/ftl/ftl_core.o 00:04:26.675 CC lib/nbd/nbd.o 00:04:26.675 CC lib/nvmf/ctrlr_discovery.o 00:04:26.675 CC lib/nvmf/ctrlr_bdev.o 00:04:26.675 CC lib/ftl/ftl_init.o 00:04:26.675 CC lib/nbd/nbd_rpc.o 00:04:26.675 CC lib/ftl/ftl_layout.o 00:04:26.675 CC lib/nvmf/subsystem.o 00:04:26.675 CC lib/ftl/ftl_debug.o 00:04:26.675 CC lib/nvmf/nvmf.o 00:04:26.675 CC lib/ftl/ftl_io.o 00:04:26.675 CC lib/nvmf/nvmf_rpc.o 00:04:26.675 CC lib/ftl/ftl_sb.o 00:04:26.675 CC lib/nvmf/transport.o 00:04:26.675 CC lib/ftl/ftl_l2p.o 00:04:26.675 CC lib/nvmf/tcp.o 00:04:26.675 CC lib/ftl/ftl_l2p_flat.o 00:04:26.675 CC lib/nvmf/stubs.o 00:04:26.675 CC lib/nvmf/mdns_server.o 00:04:26.675 CC lib/ftl/ftl_nv_cache.o 00:04:26.675 CC lib/ftl/ftl_band.o 00:04:26.675 CC lib/ftl/ftl_band_ops.o 00:04:26.675 CC lib/nvmf/vfio_user.o 00:04:26.675 CC lib/nvmf/rdma.o 00:04:26.675 CC lib/ftl/ftl_writer.o 00:04:26.675 CC lib/nvmf/auth.o 00:04:26.675 CC lib/ftl/ftl_reloc.o 00:04:26.675 CC lib/ftl/ftl_rq.o 00:04:26.675 CC lib/ftl/ftl_l2p_cache.o 00:04:26.675 CC lib/ftl/ftl_p2l.o 
00:04:26.675 CC lib/ftl/ftl_p2l_log.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.675 CC lib/ftl/utils/ftl_conf.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.675 CC lib/ftl/utils/ftl_md.o 00:04:26.675 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.675 CC lib/ftl/utils/ftl_bitmap.o 00:04:26.675 CC lib/ftl/utils/ftl_mempool.o 00:04:26.675 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:26.675 CC lib/ftl/utils/ftl_property.o 00:04:26.675 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.675 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.675 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.675 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.675 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.675 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:26.675 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.675 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.675 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.675 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.675 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:26.675 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:26.675 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.675 CC lib/ftl/base/ftl_base_dev.o 00:04:26.675 CC lib/ftl/ftl_trace.o 00:04:27.242 LIB libspdk_nbd.a 00:04:27.242 SO libspdk_nbd.so.7.0 00:04:27.501 SYMLINK libspdk_nbd.so 00:04:27.501 LIB libspdk_scsi.a 00:04:27.501 SO libspdk_scsi.so.9.0 00:04:27.501 LIB libspdk_ublk.a 00:04:27.501 SO libspdk_ublk.so.3.0 00:04:27.501 SYMLINK libspdk_scsi.so 00:04:27.501 SYMLINK libspdk_ublk.so 00:04:27.759 LIB libspdk_ftl.a 00:04:27.759 SO libspdk_ftl.so.9.0 
00:04:27.759 CC lib/iscsi/conn.o 00:04:27.759 CC lib/iscsi/init_grp.o 00:04:27.759 CC lib/iscsi/iscsi.o 00:04:27.759 CC lib/iscsi/param.o 00:04:27.759 CC lib/iscsi/portal_grp.o 00:04:27.759 CC lib/vhost/vhost.o 00:04:27.759 CC lib/iscsi/tgt_node.o 00:04:27.759 CC lib/vhost/vhost_rpc.o 00:04:27.759 CC lib/iscsi/iscsi_subsystem.o 00:04:27.759 CC lib/vhost/vhost_scsi.o 00:04:27.759 CC lib/vhost/vhost_blk.o 00:04:27.759 CC lib/iscsi/iscsi_rpc.o 00:04:27.759 CC lib/iscsi/task.o 00:04:27.759 CC lib/vhost/rte_vhost_user.o 00:04:28.017 SYMLINK libspdk_ftl.so 00:04:28.585 LIB libspdk_nvmf.a 00:04:28.585 SO libspdk_nvmf.so.20.0 00:04:28.585 LIB libspdk_vhost.a 00:04:28.585 SO libspdk_vhost.so.8.0 00:04:28.844 SYMLINK libspdk_nvmf.so 00:04:28.844 SYMLINK libspdk_vhost.so 00:04:28.844 LIB libspdk_iscsi.a 00:04:28.844 SO libspdk_iscsi.so.8.0 00:04:29.103 SYMLINK libspdk_iscsi.so 00:04:29.671 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.671 CC module/vfu_device/vfu_virtio_blk.o 00:04:29.671 CC module/vfu_device/vfu_virtio.o 00:04:29.671 CC module/vfu_device/vfu_virtio_scsi.o 00:04:29.671 CC module/vfu_device/vfu_virtio_rpc.o 00:04:29.671 CC module/vfu_device/vfu_virtio_fs.o 00:04:29.671 LIB libspdk_env_dpdk_rpc.a 00:04:29.671 CC module/keyring/linux/keyring.o 00:04:29.671 CC module/keyring/file/keyring.o 00:04:29.671 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.671 CC module/sock/posix/posix.o 00:04:29.671 CC module/keyring/linux/keyring_rpc.o 00:04:29.671 CC module/keyring/file/keyring_rpc.o 00:04:29.671 CC module/blob/bdev/blob_bdev.o 00:04:29.671 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:29.671 CC module/accel/iaa/accel_iaa.o 00:04:29.671 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.671 CC module/accel/error/accel_error.o 00:04:29.671 CC module/accel/ioat/accel_ioat.o 00:04:29.671 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.671 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.671 CC module/accel/error/accel_error_rpc.o 00:04:29.671 CC 
module/fsdev/aio/fsdev_aio.o 00:04:29.671 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:29.671 CC module/fsdev/aio/linux_aio_mgr.o 00:04:29.671 CC module/accel/dsa/accel_dsa.o 00:04:29.671 CC module/accel/dsa/accel_dsa_rpc.o 00:04:29.671 SO libspdk_env_dpdk_rpc.so.6.0 00:04:29.671 SYMLINK libspdk_env_dpdk_rpc.so 00:04:29.929 LIB libspdk_keyring_file.a 00:04:29.929 LIB libspdk_keyring_linux.a 00:04:29.929 LIB libspdk_scheduler_dpdk_governor.a 00:04:29.929 LIB libspdk_scheduler_gscheduler.a 00:04:29.929 SO libspdk_keyring_file.so.2.0 00:04:29.929 LIB libspdk_accel_ioat.a 00:04:29.929 SO libspdk_keyring_linux.so.1.0 00:04:29.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:29.929 LIB libspdk_scheduler_dynamic.a 00:04:29.929 SO libspdk_scheduler_gscheduler.so.4.0 00:04:29.929 SO libspdk_accel_ioat.so.6.0 00:04:29.929 LIB libspdk_accel_iaa.a 00:04:29.929 LIB libspdk_accel_error.a 00:04:29.929 SO libspdk_scheduler_dynamic.so.4.0 00:04:29.929 SYMLINK libspdk_keyring_file.so 00:04:29.929 SO libspdk_accel_iaa.so.3.0 00:04:29.930 SYMLINK libspdk_keyring_linux.so 00:04:29.930 LIB libspdk_blob_bdev.a 00:04:29.930 SO libspdk_accel_error.so.2.0 00:04:29.930 SYMLINK libspdk_scheduler_gscheduler.so 00:04:29.930 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:29.930 SYMLINK libspdk_accel_ioat.so 00:04:29.930 LIB libspdk_accel_dsa.a 00:04:29.930 SO libspdk_blob_bdev.so.12.0 00:04:29.930 SYMLINK libspdk_scheduler_dynamic.so 00:04:29.930 SO libspdk_accel_dsa.so.5.0 00:04:29.930 SYMLINK libspdk_accel_iaa.so 00:04:29.930 SYMLINK libspdk_accel_error.so 00:04:29.930 SYMLINK libspdk_blob_bdev.so 00:04:29.930 LIB libspdk_vfu_device.a 00:04:30.188 SYMLINK libspdk_accel_dsa.so 00:04:30.188 SO libspdk_vfu_device.so.3.0 00:04:30.188 SYMLINK libspdk_vfu_device.so 00:04:30.188 LIB libspdk_fsdev_aio.a 00:04:30.188 LIB libspdk_sock_posix.a 00:04:30.188 SO libspdk_fsdev_aio.so.1.0 00:04:30.188 SO libspdk_sock_posix.so.6.0 00:04:30.446 SYMLINK libspdk_fsdev_aio.so 00:04:30.446 SYMLINK 
libspdk_sock_posix.so 00:04:30.447 CC module/bdev/gpt/gpt.o 00:04:30.447 CC module/bdev/gpt/vbdev_gpt.o 00:04:30.447 CC module/bdev/aio/bdev_aio.o 00:04:30.447 CC module/bdev/raid/bdev_raid_rpc.o 00:04:30.447 CC module/bdev/raid/bdev_raid.o 00:04:30.447 CC module/bdev/raid/bdev_raid_sb.o 00:04:30.447 CC module/bdev/aio/bdev_aio_rpc.o 00:04:30.447 CC module/bdev/null/bdev_null.o 00:04:30.447 CC module/bdev/raid/raid1.o 00:04:30.447 CC module/bdev/null/bdev_null_rpc.o 00:04:30.447 CC module/bdev/raid/raid0.o 00:04:30.447 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.447 CC module/bdev/malloc/bdev_malloc.o 00:04:30.447 CC module/bdev/raid/concat.o 00:04:30.447 CC module/bdev/nvme/bdev_nvme.o 00:04:30.447 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.447 CC module/bdev/delay/vbdev_delay.o 00:04:30.447 CC module/bdev/nvme/nvme_rpc.o 00:04:30.447 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.447 CC module/bdev/nvme/vbdev_opal.o 00:04:30.447 CC module/bdev/nvme/bdev_mdns_client.o 00:04:30.447 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:30.447 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.447 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.447 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:30.447 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.447 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.447 CC module/bdev/lvol/vbdev_lvol.o 00:04:30.447 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.447 CC module/blobfs/bdev/blobfs_bdev.o 00:04:30.447 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.447 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:30.447 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.447 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:30.447 CC module/bdev/split/vbdev_split.o 00:04:30.447 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:30.447 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:30.447 CC module/bdev/ftl/bdev_ftl.o 00:04:30.447 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.447 CC module/bdev/error/vbdev_error.o 00:04:30.447 CC 
module/bdev/split/vbdev_split_rpc.o 00:04:30.447 CC module/bdev/error/vbdev_error_rpc.o 00:04:30.704 LIB libspdk_blobfs_bdev.a 00:04:30.704 SO libspdk_blobfs_bdev.so.6.0 00:04:30.704 LIB libspdk_bdev_split.a 00:04:30.963 LIB libspdk_bdev_gpt.a 00:04:30.963 SO libspdk_bdev_split.so.6.0 00:04:30.963 SYMLINK libspdk_blobfs_bdev.so 00:04:30.963 SO libspdk_bdev_gpt.so.6.0 00:04:30.963 LIB libspdk_bdev_aio.a 00:04:30.963 LIB libspdk_bdev_null.a 00:04:30.963 LIB libspdk_bdev_error.a 00:04:30.963 SYMLINK libspdk_bdev_split.so 00:04:30.963 LIB libspdk_bdev_ftl.a 00:04:30.963 SO libspdk_bdev_error.so.6.0 00:04:30.963 SO libspdk_bdev_null.so.6.0 00:04:30.963 SO libspdk_bdev_aio.so.6.0 00:04:30.963 LIB libspdk_bdev_malloc.a 00:04:30.963 LIB libspdk_bdev_zone_block.a 00:04:30.963 SYMLINK libspdk_bdev_gpt.so 00:04:30.963 LIB libspdk_bdev_passthru.a 00:04:30.963 LIB libspdk_bdev_delay.a 00:04:30.963 SO libspdk_bdev_ftl.so.6.0 00:04:30.963 SO libspdk_bdev_malloc.so.6.0 00:04:30.963 LIB libspdk_bdev_iscsi.a 00:04:30.963 SO libspdk_bdev_passthru.so.6.0 00:04:30.963 SO libspdk_bdev_zone_block.so.6.0 00:04:30.963 SO libspdk_bdev_delay.so.6.0 00:04:30.963 SYMLINK libspdk_bdev_null.so 00:04:30.963 SYMLINK libspdk_bdev_error.so 00:04:30.963 SYMLINK libspdk_bdev_aio.so 00:04:30.963 SO libspdk_bdev_iscsi.so.6.0 00:04:30.963 SYMLINK libspdk_bdev_ftl.so 00:04:30.963 SYMLINK libspdk_bdev_malloc.so 00:04:30.963 SYMLINK libspdk_bdev_passthru.so 00:04:30.963 SYMLINK libspdk_bdev_zone_block.so 00:04:30.963 SYMLINK libspdk_bdev_delay.so 00:04:30.963 LIB libspdk_bdev_lvol.a 00:04:30.963 SYMLINK libspdk_bdev_iscsi.so 00:04:30.963 LIB libspdk_bdev_virtio.a 00:04:30.963 SO libspdk_bdev_lvol.so.6.0 00:04:31.222 SO libspdk_bdev_virtio.so.6.0 00:04:31.222 SYMLINK libspdk_bdev_lvol.so 00:04:31.222 SYMLINK libspdk_bdev_virtio.so 00:04:31.480 LIB libspdk_bdev_raid.a 00:04:31.480 SO libspdk_bdev_raid.so.6.0 00:04:31.480 SYMLINK libspdk_bdev_raid.so 00:04:32.416 LIB libspdk_bdev_nvme.a 00:04:32.416 SO 
libspdk_bdev_nvme.so.7.1 00:04:32.675 SYMLINK libspdk_bdev_nvme.so 00:04:33.242 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.242 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.242 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.242 CC module/event/subsystems/vmd/vmd.o 00:04:33.242 CC module/event/subsystems/keyring/keyring.o 00:04:33.242 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:33.242 CC module/event/subsystems/sock/sock.o 00:04:33.242 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.242 CC module/event/subsystems/scheduler/scheduler.o 00:04:33.242 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.242 LIB libspdk_event_vmd.a 00:04:33.242 LIB libspdk_event_sock.a 00:04:33.501 LIB libspdk_event_keyring.a 00:04:33.501 LIB libspdk_event_iobuf.a 00:04:33.501 LIB libspdk_event_vfu_tgt.a 00:04:33.501 LIB libspdk_event_vhost_blk.a 00:04:33.501 LIB libspdk_event_fsdev.a 00:04:33.501 LIB libspdk_event_scheduler.a 00:04:33.501 SO libspdk_event_vmd.so.6.0 00:04:33.501 SO libspdk_event_sock.so.5.0 00:04:33.501 SO libspdk_event_keyring.so.1.0 00:04:33.501 SO libspdk_event_iobuf.so.3.0 00:04:33.501 SO libspdk_event_vhost_blk.so.3.0 00:04:33.501 SO libspdk_event_vfu_tgt.so.3.0 00:04:33.501 SO libspdk_event_fsdev.so.1.0 00:04:33.501 SO libspdk_event_scheduler.so.4.0 00:04:33.501 SYMLINK libspdk_event_vmd.so 00:04:33.501 SYMLINK libspdk_event_keyring.so 00:04:33.501 SYMLINK libspdk_event_sock.so 00:04:33.501 SYMLINK libspdk_event_vhost_blk.so 00:04:33.501 SYMLINK libspdk_event_iobuf.so 00:04:33.501 SYMLINK libspdk_event_fsdev.so 00:04:33.501 SYMLINK libspdk_event_vfu_tgt.so 00:04:33.501 SYMLINK libspdk_event_scheduler.so 00:04:33.759 CC module/event/subsystems/accel/accel.o 00:04:34.018 LIB libspdk_event_accel.a 00:04:34.018 SO libspdk_event_accel.so.6.0 00:04:34.018 SYMLINK libspdk_event_accel.so 00:04:34.276 CC module/event/subsystems/bdev/bdev.o 00:04:34.535 LIB libspdk_event_bdev.a 00:04:34.535 SO libspdk_event_bdev.so.6.0 00:04:34.535 SYMLINK 
libspdk_event_bdev.so 00:04:34.793 CC module/event/subsystems/nbd/nbd.o 00:04:34.793 CC module/event/subsystems/scsi/scsi.o 00:04:34.793 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:34.793 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:34.793 CC module/event/subsystems/ublk/ublk.o 00:04:35.051 LIB libspdk_event_nbd.a 00:04:35.051 LIB libspdk_event_ublk.a 00:04:35.051 LIB libspdk_event_scsi.a 00:04:35.051 SO libspdk_event_ublk.so.3.0 00:04:35.051 SO libspdk_event_nbd.so.6.0 00:04:35.051 SO libspdk_event_scsi.so.6.0 00:04:35.051 LIB libspdk_event_nvmf.a 00:04:35.051 SYMLINK libspdk_event_ublk.so 00:04:35.051 SYMLINK libspdk_event_nbd.so 00:04:35.051 SO libspdk_event_nvmf.so.6.0 00:04:35.051 SYMLINK libspdk_event_scsi.so 00:04:35.051 SYMLINK libspdk_event_nvmf.so 00:04:35.309 CC module/event/subsystems/iscsi/iscsi.o 00:04:35.567 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.567 LIB libspdk_event_vhost_scsi.a 00:04:35.567 LIB libspdk_event_iscsi.a 00:04:35.567 SO libspdk_event_iscsi.so.6.0 00:04:35.567 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.567 SYMLINK libspdk_event_iscsi.so 00:04:35.567 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.826 SO libspdk.so.6.0 00:04:35.826 SYMLINK libspdk.so 00:04:36.085 CC app/trace_record/trace_record.o 00:04:36.085 CC app/spdk_top/spdk_top.o 00:04:36.085 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.085 CXX app/trace/trace.o 00:04:36.354 CC app/spdk_nvme_identify/identify.o 00:04:36.354 TEST_HEADER include/spdk/accel.h 00:04:36.354 TEST_HEADER include/spdk/assert.h 00:04:36.354 CC app/spdk_nvme_perf/perf.o 00:04:36.354 CC test/rpc_client/rpc_client_test.o 00:04:36.354 TEST_HEADER include/spdk/accel_module.h 00:04:36.354 TEST_HEADER include/spdk/barrier.h 00:04:36.354 TEST_HEADER include/spdk/bdev_module.h 00:04:36.354 TEST_HEADER include/spdk/base64.h 00:04:36.354 TEST_HEADER include/spdk/bdev.h 00:04:36.354 TEST_HEADER include/spdk/bit_array.h 00:04:36.354 TEST_HEADER include/spdk/bdev_zone.h 00:04:36.354 
TEST_HEADER include/spdk/bit_pool.h 00:04:36.354 CC app/spdk_lspci/spdk_lspci.o 00:04:36.354 TEST_HEADER include/spdk/blob_bdev.h 00:04:36.354 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:36.354 TEST_HEADER include/spdk/blob.h 00:04:36.354 TEST_HEADER include/spdk/blobfs.h 00:04:36.354 TEST_HEADER include/spdk/conf.h 00:04:36.354 TEST_HEADER include/spdk/config.h 00:04:36.354 TEST_HEADER include/spdk/cpuset.h 00:04:36.354 TEST_HEADER include/spdk/crc32.h 00:04:36.354 TEST_HEADER include/spdk/crc16.h 00:04:36.354 TEST_HEADER include/spdk/crc64.h 00:04:36.354 TEST_HEADER include/spdk/dif.h 00:04:36.354 TEST_HEADER include/spdk/dma.h 00:04:36.354 TEST_HEADER include/spdk/endian.h 00:04:36.354 TEST_HEADER include/spdk/env_dpdk.h 00:04:36.354 TEST_HEADER include/spdk/env.h 00:04:36.354 TEST_HEADER include/spdk/fd_group.h 00:04:36.354 TEST_HEADER include/spdk/event.h 00:04:36.354 TEST_HEADER include/spdk/fd.h 00:04:36.354 TEST_HEADER include/spdk/file.h 00:04:36.354 TEST_HEADER include/spdk/fsdev.h 00:04:36.354 TEST_HEADER include/spdk/fsdev_module.h 00:04:36.354 TEST_HEADER include/spdk/ftl.h 00:04:36.354 TEST_HEADER include/spdk/gpt_spec.h 00:04:36.354 TEST_HEADER include/spdk/hexlify.h 00:04:36.354 TEST_HEADER include/spdk/histogram_data.h 00:04:36.354 TEST_HEADER include/spdk/idxd_spec.h 00:04:36.354 TEST_HEADER include/spdk/idxd.h 00:04:36.354 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:36.354 TEST_HEADER include/spdk/init.h 00:04:36.354 TEST_HEADER include/spdk/ioat.h 00:04:36.354 TEST_HEADER include/spdk/ioat_spec.h 00:04:36.354 TEST_HEADER include/spdk/json.h 00:04:36.354 TEST_HEADER include/spdk/iscsi_spec.h 00:04:36.354 TEST_HEADER include/spdk/jsonrpc.h 00:04:36.354 TEST_HEADER include/spdk/keyring.h 00:04:36.354 TEST_HEADER include/spdk/keyring_module.h 00:04:36.354 TEST_HEADER include/spdk/likely.h 00:04:36.354 TEST_HEADER include/spdk/log.h 00:04:36.354 TEST_HEADER include/spdk/lvol.h 00:04:36.354 TEST_HEADER include/spdk/md5.h 00:04:36.354 TEST_HEADER 
include/spdk/memory.h 00:04:36.354 CC app/iscsi_tgt/iscsi_tgt.o 00:04:36.354 TEST_HEADER include/spdk/nbd.h 00:04:36.354 CC app/spdk_dd/spdk_dd.o 00:04:36.354 TEST_HEADER include/spdk/mmio.h 00:04:36.354 TEST_HEADER include/spdk/net.h 00:04:36.354 TEST_HEADER include/spdk/notify.h 00:04:36.354 CC app/nvmf_tgt/nvmf_main.o 00:04:36.354 TEST_HEADER include/spdk/nvme.h 00:04:36.354 TEST_HEADER include/spdk/nvme_intel.h 00:04:36.354 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:36.354 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:36.354 TEST_HEADER include/spdk/nvme_zns.h 00:04:36.354 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:36.354 TEST_HEADER include/spdk/nvmf.h 00:04:36.354 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:36.354 TEST_HEADER include/spdk/nvme_spec.h 00:04:36.354 TEST_HEADER include/spdk/opal.h 00:04:36.354 TEST_HEADER include/spdk/nvmf_spec.h 00:04:36.354 TEST_HEADER include/spdk/nvmf_transport.h 00:04:36.354 TEST_HEADER include/spdk/pci_ids.h 00:04:36.354 TEST_HEADER include/spdk/pipe.h 00:04:36.354 TEST_HEADER include/spdk/opal_spec.h 00:04:36.354 TEST_HEADER include/spdk/queue.h 00:04:36.354 TEST_HEADER include/spdk/reduce.h 00:04:36.354 TEST_HEADER include/spdk/scheduler.h 00:04:36.354 TEST_HEADER include/spdk/rpc.h 00:04:36.354 TEST_HEADER include/spdk/scsi.h 00:04:36.354 TEST_HEADER include/spdk/scsi_spec.h 00:04:36.354 TEST_HEADER include/spdk/stdinc.h 00:04:36.354 TEST_HEADER include/spdk/string.h 00:04:36.354 TEST_HEADER include/spdk/sock.h 00:04:36.354 TEST_HEADER include/spdk/thread.h 00:04:36.354 TEST_HEADER include/spdk/trace.h 00:04:36.354 TEST_HEADER include/spdk/trace_parser.h 00:04:36.354 TEST_HEADER include/spdk/ublk.h 00:04:36.354 TEST_HEADER include/spdk/util.h 00:04:36.354 TEST_HEADER include/spdk/tree.h 00:04:36.354 TEST_HEADER include/spdk/uuid.h 00:04:36.354 TEST_HEADER include/spdk/version.h 00:04:36.354 CC app/spdk_tgt/spdk_tgt.o 00:04:36.354 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:36.354 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:36.354 TEST_HEADER include/spdk/vmd.h 00:04:36.354 TEST_HEADER include/spdk/vhost.h 00:04:36.354 TEST_HEADER include/spdk/zipf.h 00:04:36.354 TEST_HEADER include/spdk/xor.h 00:04:36.354 CXX test/cpp_headers/accel.o 00:04:36.354 CXX test/cpp_headers/accel_module.o 00:04:36.354 CXX test/cpp_headers/barrier.o 00:04:36.354 CXX test/cpp_headers/assert.o 00:04:36.354 CXX test/cpp_headers/base64.o 00:04:36.354 CXX test/cpp_headers/bdev_module.o 00:04:36.354 CXX test/cpp_headers/bdev_zone.o 00:04:36.354 CXX test/cpp_headers/bdev.o 00:04:36.354 CXX test/cpp_headers/bit_pool.o 00:04:36.354 CXX test/cpp_headers/bit_array.o 00:04:36.354 CXX test/cpp_headers/blobfs.o 00:04:36.354 CXX test/cpp_headers/blob_bdev.o 00:04:36.354 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.354 CXX test/cpp_headers/conf.o 00:04:36.354 CXX test/cpp_headers/blob.o 00:04:36.354 CXX test/cpp_headers/config.o 00:04:36.354 CXX test/cpp_headers/cpuset.o 00:04:36.354 CXX test/cpp_headers/crc16.o 00:04:36.354 CXX test/cpp_headers/crc32.o 00:04:36.354 CXX test/cpp_headers/crc64.o 00:04:36.354 CXX test/cpp_headers/dif.o 00:04:36.354 CXX test/cpp_headers/dma.o 00:04:36.354 CXX test/cpp_headers/endian.o 00:04:36.354 CXX test/cpp_headers/env_dpdk.o 00:04:36.354 CXX test/cpp_headers/env.o 00:04:36.354 CXX test/cpp_headers/event.o 00:04:36.354 CXX test/cpp_headers/fd.o 00:04:36.354 CXX test/cpp_headers/fd_group.o 00:04:36.354 CXX test/cpp_headers/fsdev_module.o 00:04:36.354 CXX test/cpp_headers/file.o 00:04:36.354 CXX test/cpp_headers/ftl.o 00:04:36.354 CXX test/cpp_headers/fsdev.o 00:04:36.354 CXX test/cpp_headers/gpt_spec.o 00:04:36.354 CXX test/cpp_headers/histogram_data.o 00:04:36.354 CXX test/cpp_headers/hexlify.o 00:04:36.354 CXX test/cpp_headers/idxd_spec.o 00:04:36.354 CXX test/cpp_headers/idxd.o 00:04:36.354 CXX test/cpp_headers/init.o 00:04:36.354 CXX test/cpp_headers/ioat.o 00:04:36.354 CXX test/cpp_headers/iscsi_spec.o 00:04:36.354 CXX test/cpp_headers/ioat_spec.o 
00:04:36.354 CXX test/cpp_headers/json.o 00:04:36.354 CXX test/cpp_headers/jsonrpc.o 00:04:36.354 CXX test/cpp_headers/keyring.o 00:04:36.354 CXX test/cpp_headers/keyring_module.o 00:04:36.354 CXX test/cpp_headers/likely.o 00:04:36.354 CXX test/cpp_headers/log.o 00:04:36.354 CXX test/cpp_headers/lvol.o 00:04:36.354 CXX test/cpp_headers/md5.o 00:04:36.354 CXX test/cpp_headers/memory.o 00:04:36.354 CXX test/cpp_headers/mmio.o 00:04:36.354 CXX test/cpp_headers/notify.o 00:04:36.354 CXX test/cpp_headers/net.o 00:04:36.354 CXX test/cpp_headers/nbd.o 00:04:36.354 CXX test/cpp_headers/nvme.o 00:04:36.354 CXX test/cpp_headers/nvme_intel.o 00:04:36.354 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.354 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.354 CXX test/cpp_headers/nvme_zns.o 00:04:36.354 CXX test/cpp_headers/nvme_spec.o 00:04:36.354 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.354 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.354 CXX test/cpp_headers/nvmf.o 00:04:36.354 CXX test/cpp_headers/nvmf_transport.o 00:04:36.354 CXX test/cpp_headers/nvmf_spec.o 00:04:36.354 CXX test/cpp_headers/opal.o 00:04:36.354 CXX test/cpp_headers/opal_spec.o 00:04:36.354 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:36.355 CC test/env/vtophys/vtophys.o 00:04:36.355 CC examples/ioat/perf/perf.o 00:04:36.355 CC examples/ioat/verify/verify.o 00:04:36.355 CC examples/util/zipf/zipf.o 00:04:36.355 CC test/env/pci/pci_ut.o 00:04:36.355 CC test/env/memory/memory_ut.o 00:04:36.355 CC test/app/histogram_perf/histogram_perf.o 00:04:36.355 CC test/thread/poller_perf/poller_perf.o 00:04:36.355 CC test/app/jsoncat/jsoncat.o 00:04:36.355 CC app/fio/nvme/fio_plugin.o 00:04:36.619 CC test/app/bdev_svc/bdev_svc.o 00:04:36.619 CC test/dma/test_dma/test_dma.o 00:04:36.619 CC test/app/stub/stub.o 00:04:36.619 CC app/fio/bdev/fio_plugin.o 00:04:36.619 LINK spdk_lspci 00:04:36.619 LINK rpc_client_test 00:04:36.619 LINK spdk_trace_record 00:04:36.883 LINK interrupt_tgt 00:04:36.883 LINK 
spdk_nvme_discover 00:04:36.883 CC test/env/mem_callbacks/mem_callbacks.o 00:04:36.883 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.883 LINK poller_perf 00:04:36.883 LINK nvmf_tgt 00:04:36.883 LINK zipf 00:04:36.883 LINK histogram_perf 00:04:36.883 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:36.883 LINK env_dpdk_post_init 00:04:36.883 LINK vtophys 00:04:36.883 CXX test/cpp_headers/pci_ids.o 00:04:37.144 LINK iscsi_tgt 00:04:37.144 CXX test/cpp_headers/pipe.o 00:04:37.144 CXX test/cpp_headers/queue.o 00:04:37.144 LINK bdev_svc 00:04:37.144 CXX test/cpp_headers/reduce.o 00:04:37.144 CXX test/cpp_headers/rpc.o 00:04:37.144 CXX test/cpp_headers/scsi.o 00:04:37.144 CXX test/cpp_headers/scheduler.o 00:04:37.144 CXX test/cpp_headers/scsi_spec.o 00:04:37.144 CXX test/cpp_headers/sock.o 00:04:37.144 CXX test/cpp_headers/stdinc.o 00:04:37.144 CXX test/cpp_headers/string.o 00:04:37.144 LINK verify 00:04:37.144 CXX test/cpp_headers/thread.o 00:04:37.144 LINK ioat_perf 00:04:37.144 CXX test/cpp_headers/trace_parser.o 00:04:37.144 CXX test/cpp_headers/trace.o 00:04:37.144 CXX test/cpp_headers/tree.o 00:04:37.144 CXX test/cpp_headers/ublk.o 00:04:37.144 CXX test/cpp_headers/util.o 00:04:37.144 CXX test/cpp_headers/uuid.o 00:04:37.144 LINK jsoncat 00:04:37.144 CXX test/cpp_headers/version.o 00:04:37.144 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.144 CXX test/cpp_headers/vfio_user_spec.o 00:04:37.144 CXX test/cpp_headers/vhost.o 00:04:37.144 CXX test/cpp_headers/vmd.o 00:04:37.144 LINK spdk_tgt 00:04:37.144 CXX test/cpp_headers/xor.o 00:04:37.144 CXX test/cpp_headers/zipf.o 00:04:37.144 LINK stub 00:04:37.144 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:37.144 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:37.144 LINK spdk_trace 00:04:37.144 LINK pci_ut 00:04:37.401 LINK spdk_dd 00:04:37.401 LINK spdk_bdev 00:04:37.401 LINK spdk_nvme 00:04:37.401 CC test/event/event_perf/event_perf.o 00:04:37.401 CC test/event/reactor/reactor.o 00:04:37.401 CC 
test/event/reactor_perf/reactor_perf.o 00:04:37.401 CC test/event/app_repeat/app_repeat.o 00:04:37.401 CC test/event/scheduler/scheduler.o 00:04:37.659 LINK nvme_fuzz 00:04:37.659 CC examples/sock/hello_world/hello_sock.o 00:04:37.659 CC examples/idxd/perf/perf.o 00:04:37.659 CC examples/vmd/lsvmd/lsvmd.o 00:04:37.659 CC examples/vmd/led/led.o 00:04:37.659 LINK test_dma 00:04:37.659 LINK spdk_nvme_perf 00:04:37.659 CC examples/thread/thread/thread_ex.o 00:04:37.659 LINK event_perf 00:04:37.659 LINK reactor 00:04:37.659 LINK reactor_perf 00:04:37.659 CC app/vhost/vhost.o 00:04:37.659 LINK vhost_fuzz 00:04:37.659 LINK app_repeat 00:04:37.659 LINK spdk_top 00:04:37.659 LINK lsvmd 00:04:37.659 LINK spdk_nvme_identify 00:04:37.659 LINK led 00:04:37.659 LINK mem_callbacks 00:04:37.659 LINK scheduler 00:04:37.918 LINK hello_sock 00:04:37.918 LINK vhost 00:04:37.918 LINK thread 00:04:37.918 LINK idxd_perf 00:04:38.177 LINK memory_ut 00:04:38.177 CC test/nvme/reserve/reserve.o 00:04:38.177 CC test/nvme/aer/aer.o 00:04:38.177 CC test/nvme/boot_partition/boot_partition.o 00:04:38.177 CC test/nvme/overhead/overhead.o 00:04:38.177 CC test/nvme/connect_stress/connect_stress.o 00:04:38.177 CC test/nvme/sgl/sgl.o 00:04:38.177 CC test/nvme/startup/startup.o 00:04:38.177 CC test/nvme/err_injection/err_injection.o 00:04:38.177 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:38.177 CC test/nvme/compliance/nvme_compliance.o 00:04:38.177 CC test/blobfs/mkfs/mkfs.o 00:04:38.177 CC test/nvme/simple_copy/simple_copy.o 00:04:38.177 CC test/nvme/e2edp/nvme_dp.o 00:04:38.177 CC test/nvme/fdp/fdp.o 00:04:38.177 CC test/nvme/reset/reset.o 00:04:38.177 CC test/nvme/cuse/cuse.o 00:04:38.177 CC test/nvme/fused_ordering/fused_ordering.o 00:04:38.177 CC test/accel/dif/dif.o 00:04:38.177 CC test/lvol/esnap/esnap.o 00:04:38.177 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:38.177 CC examples/nvme/reconnect/reconnect.o 00:04:38.177 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:38.177 CC 
examples/nvme/hello_world/hello_world.o 00:04:38.177 CC examples/nvme/hotplug/hotplug.o 00:04:38.177 CC examples/nvme/abort/abort.o 00:04:38.177 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.177 CC examples/nvme/arbitration/arbitration.o 00:04:38.177 LINK boot_partition 00:04:38.436 LINK startup 00:04:38.436 LINK doorbell_aers 00:04:38.436 LINK connect_stress 00:04:38.436 LINK err_injection 00:04:38.436 LINK reserve 00:04:38.436 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:38.436 CC examples/accel/perf/accel_perf.o 00:04:38.436 LINK reset 00:04:38.436 LINK overhead 00:04:38.436 CC examples/blob/hello_world/hello_blob.o 00:04:38.436 LINK fused_ordering 00:04:38.436 LINK simple_copy 00:04:38.436 CC examples/blob/cli/blobcli.o 00:04:38.436 LINK mkfs 00:04:38.436 LINK aer 00:04:38.436 LINK nvme_dp 00:04:38.436 LINK sgl 00:04:38.436 LINK cmb_copy 00:04:38.436 LINK fdp 00:04:38.436 LINK nvme_compliance 00:04:38.436 LINK pmr_persistence 00:04:38.436 LINK hello_world 00:04:38.436 LINK hotplug 00:04:38.694 LINK iscsi_fuzz 00:04:38.694 LINK reconnect 00:04:38.694 LINK arbitration 00:04:38.694 LINK abort 00:04:38.694 LINK hello_blob 00:04:38.694 LINK hello_fsdev 00:04:38.694 LINK nvme_manage 00:04:38.694 LINK dif 00:04:38.694 LINK accel_perf 00:04:38.953 LINK blobcli 00:04:39.211 LINK cuse 00:04:39.211 CC test/bdev/bdevio/bdevio.o 00:04:39.211 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.211 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.469 LINK hello_bdev 00:04:39.469 LINK bdevio 00:04:40.037 LINK bdevperf 00:04:40.296 CC examples/nvmf/nvmf/nvmf.o 00:04:40.863 LINK nvmf 00:04:41.798 LINK esnap 00:04:42.056 00:04:42.056 real 0m55.674s 00:04:42.056 user 8m20.383s 00:04:42.056 sys 3m54.357s 00:04:42.056 05:29:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:42.056 05:29:59 make -- common/autotest_common.sh@10 -- $ set +x 00:04:42.056 ************************************ 00:04:42.056 END TEST make 00:04:42.056 
************************************ 00:04:42.056 05:29:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:42.056 05:29:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:42.056 05:29:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:42.056 05:29:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.056 05:29:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:42.056 05:29:59 -- pm/common@44 -- $ pid=4041271 00:04:42.056 05:29:59 -- pm/common@50 -- $ kill -TERM 4041271 00:04:42.056 05:29:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.056 05:29:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:42.056 05:29:59 -- pm/common@44 -- $ pid=4041272 00:04:42.056 05:29:59 -- pm/common@50 -- $ kill -TERM 4041272 00:04:42.056 05:29:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.056 05:29:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:42.056 05:29:59 -- pm/common@44 -- $ pid=4041275 00:04:42.056 05:29:59 -- pm/common@50 -- $ kill -TERM 4041275 00:04:42.056 05:29:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.056 05:29:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:42.056 05:29:59 -- pm/common@44 -- $ pid=4041298 00:04:42.056 05:29:59 -- pm/common@50 -- $ sudo -E kill -TERM 4041298 00:04:42.056 05:29:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:42.057 05:29:59 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:42.315 05:30:00 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 
00:04:42.315 05:30:00 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.315 05:30:00 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.315 05:30:00 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.315 05:30:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.315 05:30:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.315 05:30:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.315 05:30:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.315 05:30:00 -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.315 05:30:00 -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.315 05:30:00 -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.315 05:30:00 -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.315 05:30:00 -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.315 05:30:00 -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.315 05:30:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.315 05:30:00 -- scripts/common.sh@344 -- # case "$op" in 00:04:42.315 05:30:00 -- scripts/common.sh@345 -- # : 1 00:04:42.315 05:30:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.315 05:30:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.315 05:30:00 -- scripts/common.sh@365 -- # decimal 1 00:04:42.315 05:30:00 -- scripts/common.sh@353 -- # local d=1 00:04:42.315 05:30:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.315 05:30:00 -- scripts/common.sh@355 -- # echo 1 00:04:42.315 05:30:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.315 05:30:00 -- scripts/common.sh@366 -- # decimal 2 00:04:42.315 05:30:00 -- scripts/common.sh@353 -- # local d=2 00:04:42.315 05:30:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.315 05:30:00 -- scripts/common.sh@355 -- # echo 2 00:04:42.315 05:30:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.315 05:30:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.315 05:30:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.315 05:30:00 -- scripts/common.sh@368 -- # return 0 00:04:42.315 05:30:00 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.315 05:30:00 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.315 --rc genhtml_branch_coverage=1 00:04:42.315 --rc genhtml_function_coverage=1 00:04:42.315 --rc genhtml_legend=1 00:04:42.315 --rc geninfo_all_blocks=1 00:04:42.315 --rc geninfo_unexecuted_blocks=1 00:04:42.315 00:04:42.315 ' 00:04:42.315 05:30:00 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.315 --rc genhtml_branch_coverage=1 00:04:42.315 --rc genhtml_function_coverage=1 00:04:42.315 --rc genhtml_legend=1 00:04:42.315 --rc geninfo_all_blocks=1 00:04:42.315 --rc geninfo_unexecuted_blocks=1 00:04:42.315 00:04:42.315 ' 00:04:42.315 05:30:00 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.315 --rc genhtml_branch_coverage=1 00:04:42.315 --rc 
genhtml_function_coverage=1 00:04:42.315 --rc genhtml_legend=1 00:04:42.315 --rc geninfo_all_blocks=1 00:04:42.315 --rc geninfo_unexecuted_blocks=1 00:04:42.315 00:04:42.315 ' 00:04:42.315 05:30:00 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.315 --rc genhtml_branch_coverage=1 00:04:42.315 --rc genhtml_function_coverage=1 00:04:42.315 --rc genhtml_legend=1 00:04:42.315 --rc geninfo_all_blocks=1 00:04:42.315 --rc geninfo_unexecuted_blocks=1 00:04:42.315 00:04:42.315 ' 00:04:42.315 05:30:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.315 05:30:00 -- nvmf/common.sh@7 -- # uname -s 00:04:42.315 05:30:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.315 05:30:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.315 05:30:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.315 05:30:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.315 05:30:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.315 05:30:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.315 05:30:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.315 05:30:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.315 05:30:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.315 05:30:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.315 05:30:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:42.315 05:30:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:42.315 05:30:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.315 05:30:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.315 05:30:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:42.315 05:30:00 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.315 05:30:00 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.315 05:30:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.315 05:30:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.315 05:30:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.315 05:30:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.315 05:30:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.315 05:30:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.315 05:30:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.315 05:30:00 -- paths/export.sh@5 -- # export PATH 00:04:42.315 05:30:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.315 05:30:00 -- nvmf/common.sh@51 -- # : 0 00:04:42.315 05:30:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.315 05:30:00 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:42.315 05:30:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.315 05:30:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.315 05:30:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.315 05:30:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.315 05:30:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.315 05:30:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.315 05:30:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.315 05:30:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:42.315 05:30:00 -- spdk/autotest.sh@32 -- # uname -s 00:04:42.315 05:30:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:42.315 05:30:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:42.315 05:30:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:42.315 05:30:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:42.315 05:30:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:42.315 05:30:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:42.315 05:30:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:42.315 05:30:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:42.315 05:30:00 -- spdk/autotest.sh@48 -- # udevadm_pid=4103588 00:04:42.315 05:30:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:42.315 05:30:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:42.315 05:30:00 -- pm/common@17 -- # local monitor 00:04:42.315 05:30:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.315 05:30:00 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:42.315 05:30:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.315 05:30:00 -- pm/common@21 -- # date +%s 00:04:42.315 05:30:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.315 05:30:00 -- pm/common@21 -- # date +%s 00:04:42.315 05:30:00 -- pm/common@25 -- # sleep 1 00:04:42.315 05:30:00 -- pm/common@21 -- # date +%s 00:04:42.315 05:30:00 -- pm/common@21 -- # date +%s 00:04:42.315 05:30:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733805000 00:04:42.315 05:30:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733805000 00:04:42.315 05:30:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733805000 00:04:42.315 05:30:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733805000 00:04:42.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733805000_collect-cpu-load.pm.log 00:04:42.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733805000_collect-vmstat.pm.log 00:04:42.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733805000_collect-cpu-temp.pm.log 00:04:42.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733805000_collect-bmc-pm.bmc.pm.log 00:04:43.250 
05:30:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.250 05:30:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:43.250 05:30:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.250 05:30:01 -- common/autotest_common.sh@10 -- # set +x 00:04:43.250 05:30:01 -- spdk/autotest.sh@59 -- # create_test_list 00:04:43.250 05:30:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:43.250 05:30:01 -- common/autotest_common.sh@10 -- # set +x 00:04:43.508 05:30:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:43.508 05:30:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.508 05:30:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.508 05:30:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:43.509 05:30:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.509 05:30:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.509 05:30:01 -- common/autotest_common.sh@1457 -- # uname 00:04:43.509 05:30:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:43.509 05:30:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.509 05:30:01 -- common/autotest_common.sh@1477 -- # uname 00:04:43.509 05:30:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:43.509 05:30:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.509 05:30:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.509 lcov: LCOV version 1.15 00:04:43.509 05:30:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:55.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:55.709 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:10.584 05:30:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:10.584 05:30:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.584 05:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:10.584 05:30:26 -- spdk/autotest.sh@78 -- # rm -f 00:05:10.584 05:30:26 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.962 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:05:11.962 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:11.962 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:11.962 
0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:11.962 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:12.221 05:30:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:12.221 05:30:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:12.221 05:30:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:12.221 05:30:29 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:12.221 05:30:29 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:12.221 05:30:29 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:12.221 05:30:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:05:12.221 05:30:29 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:12.221 05:30:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:12.221 05:30:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.221 05:30:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.221 05:30:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1669 -- # bdf=0000:5f:00.0 00:05:12.221 05:30:29 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:12.221 05:30:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:12.221 05:30:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.221 05:30:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.221 05:30:29 -- 
common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:12.221 05:30:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:12.221 05:30:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:12.221 05:30:29 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:05:12.221 05:30:29 -- common/autotest_common.sh@1672 -- # zoned_ctrls["$nvme"]=0000:5f:00.0 00:05:12.221 05:30:29 -- common/autotest_common.sh@1673 -- # continue 2 00:05:12.221 05:30:29 -- common/autotest_common.sh@1678 -- # for nvme in "${!zoned_ctrls[@]}" 00:05:12.221 05:30:29 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:05:12.221 05:30:29 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:05:12.221 05:30:29 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:05:12.221 05:30:29 -- spdk/autotest.sh@85 -- # (( 2 > 0 )) 00:05:12.221 05:30:29 -- spdk/autotest.sh@90 -- # export 'PCI_BLOCKED=0000:5f:00.0 0000:5f:00.0' 00:05:12.221 05:30:29 -- spdk/autotest.sh@90 -- # PCI_BLOCKED='0000:5f:00.0 0000:5f:00.0' 00:05:12.221 05:30:29 -- spdk/autotest.sh@91 -- # export 'PCI_ZONED=0000:5f:00.0 0000:5f:00.0' 00:05:12.221 05:30:29 -- spdk/autotest.sh@91 -- # PCI_ZONED='0000:5f:00.0 0000:5f:00.0' 00:05:12.221 05:30:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.221 05:30:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.221 05:30:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:12.221 05:30:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:12.221 05:30:29 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:12.221 No valid GPT data, bailing 00:05:12.221 05:30:29 -- scripts/common.sh@394 
-- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.221 05:30:30 -- scripts/common.sh@394 -- # pt= 00:05:12.221 05:30:30 -- scripts/common.sh@395 -- # return 1 00:05:12.221 05:30:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:12.221 1+0 records in 00:05:12.221 1+0 records out 00:05:12.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537706 s, 195 MB/s 00:05:12.221 05:30:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.221 05:30:30 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:05:12.221 05:30:30 -- spdk/autotest.sh@99 -- # continue 00:05:12.221 05:30:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.221 05:30:30 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:05:12.221 05:30:30 -- spdk/autotest.sh@99 -- # continue 00:05:12.221 05:30:30 -- spdk/autotest.sh@105 -- # sync 00:05:12.221 05:30:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:12.221 05:30:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:12.221 05:30:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:17.494 05:30:35 -- spdk/autotest.sh@111 -- # uname -s 00:05:17.494 05:30:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:17.494 05:30:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:17.494 05:30:35 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:20.785 Hugepages 00:05:20.785 node hugesize free / total 00:05:20.785 node0 1048576kB 0 / 0 00:05:20.785 node0 2048kB 0 / 0 00:05:20.785 node1 1048576kB 0 / 0 00:05:20.785 node1 2048kB 0 / 0 00:05:20.785 00:05:20.785 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.785 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.4 
8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:20.785 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:21.048 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:21.048 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:05:21.048 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:21.048 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:21.048 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:21.049 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:21.049 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:21.049 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:21.049 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:21.049 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:21.049 05:30:38 -- spdk/autotest.sh@117 -- # uname -s 00:05:21.049 05:30:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:21.049 05:30:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:21.049 05:30:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:24.490 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:24.490 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:24.490 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:05:24.490 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:24.749 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:25.317 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:25.576 05:30:43 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:26.513 05:30:44 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:26.513 05:30:44 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:26.513 05:30:44 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.513 05:30:44 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:26.513 05:30:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.513 05:30:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.513 05:30:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.513 05:30:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.513 05:30:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:26.772 05:30:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:26.772 05:30:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:26.772 05:30:44 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.074 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:30.074 Waiting for block devices as requested 00:05:30.074 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:30.074 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:30.074 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:30.334 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:30.334 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:30.334 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:30.593 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:30.593 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:30.593 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:05:30.593 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:30.852 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:30.852 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:30.852 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:31.111 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:31.111 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:31.111 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:31.370 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:31.370 05:30:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:31.370 05:30:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:31.370 05:30:49 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:05:31.370 05:30:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:31.370 05:30:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:31.370 05:30:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:31.370 05:30:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:31.370 05:30:49 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:31.370 05:30:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:31.370 05:30:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:31.370 
05:30:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:31.370 05:30:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:31.370 05:30:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.370 05:30:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:31.370 05:30:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:31.370 05:30:49 -- common/autotest_common.sh@1543 -- # continue 00:05:31.370 05:30:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:31.370 05:30:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.370 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:31.370 05:30:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:31.370 05:30:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.370 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:31.370 05:30:49 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:34.660 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:34.660 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:34.660 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:34.660 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:34.660 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:34.660 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:34.919 0000:80:04.0 (8086 2021): 
ioatdma -> vfio-pci 00:05:35.856 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:35.856 05:30:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:35.856 05:30:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.856 05:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:35.856 05:30:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:35.856 05:30:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:35.856 05:30:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:35.856 05:30:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:35.856 05:30:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:35.856 05:30:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:35.856 05:30:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:35.856 05:30:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:35.856 05:30:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:35.856 05:30:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:35.856 05:30:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.856 05:30:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:35.856 05:30:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:35.856 05:30:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:35.856 05:30:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:35.856 05:30:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:35.856 05:30:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:35.856 05:30:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:35.856 05:30:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:35.856 05:30:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 
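The `get_nvme_bdfs_by_id 0x0a54` walk traced above selects controllers by the PCI device ID read from sysfs (`cat /sys/bus/pci/devices/$bdf/device`, compare, collect). A minimal standalone sketch of that filter — the sysfs tree here is a temporary mock, with the two BDFs and IDs mirroring this node's `0000:5e:00.0`/`0x0a54` and `0000:5f:00.0`/`0x2600`:

```shell
# Mock /sys/bus/pci/devices with two PCI functions and their device IDs.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:5e:00.0" "$sysfs/0000:5f:00.0"
echo 0x0a54 > "$sysfs/0000:5e:00.0/device"   # Intel NVMe, the ID being matched
echo 0x2600 > "$sysfs/0000:5f:00.0/device"   # the zoned controller, filtered out

# Same shape as the traced loop: read each device ID, keep matches.
bdfs=()
for bdf in 0000:5e:00.0 0000:5f:00.0; do
    device=$(cat "$sysfs/$bdf/device")
    if [[ "$device" == "0x0a54" ]]; then
        bdfs+=("$bdf")
    fi
done
printf '%s\n' "${bdfs[@]}"
rm -rf "$sysfs"
```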
00:05:35.856 05:30:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:35.856 05:30:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:05:35.856 05:30:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:35.856 05:30:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=4119464 00:05:35.856 05:30:53 -- common/autotest_common.sh@1585 -- # waitforlisten 4119464 00:05:35.856 05:30:53 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.856 05:30:53 -- common/autotest_common.sh@835 -- # '[' -z 4119464 ']' 00:05:35.856 05:30:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.856 05:30:53 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.856 05:30:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.856 05:30:53 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.856 05:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:36.115 [2024-12-10 05:30:53.832767] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
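`waitforlisten` above blocks until `spdk_tgt` (pid 4119464) exposes its RPC endpoint at `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A hedged sketch of that retry loop, with a plain temp file standing in for the UNIX domain socket and a background `touch` standing in for the target process coming up:

```shell
# Stand-in for /var/tmp/spdk.sock; mktemp -u only generates the name.
rpc_addr="$(mktemp -u)"
( sleep 0.3; touch "$rpc_addr" ) &   # stand-in for spdk_tgt creating its socket

# Poll until the endpoint appears, bounded by max_retries.
max_retries=100
i=0
until [ -e "$rpc_addr" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "timed out waiting for $rpc_addr" >&2
        exit 1
    fi
    sleep 0.1
done
wait
echo "listening on $rpc_addr"
```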
00:05:36.115 [2024-12-10 05:30:53.832822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119464 ] 00:05:36.115 [2024-12-10 05:30:53.913815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.115 [2024-12-10 05:30:53.953772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.374 05:30:54 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.374 05:30:54 -- common/autotest_common.sh@868 -- # return 0 00:05:36.374 05:30:54 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:36.374 05:30:54 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:36.374 05:30:54 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:39.665 nvme0n1 00:05:39.665 05:30:57 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:39.665 [2024-12-10 05:30:57.353967] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:39.665 [2024-12-10 05:30:57.353995] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:39.665 request: 00:05:39.665 { 00:05:39.665 "nvme_ctrlr_name": "nvme0", 00:05:39.665 "password": "test", 00:05:39.665 "method": "bdev_nvme_opal_revert", 00:05:39.665 "req_id": 1 00:05:39.665 } 00:05:39.665 Got JSON-RPC error response 00:05:39.665 response: 00:05:39.665 { 00:05:39.665 "code": -32603, 00:05:39.665 "message": "Internal error" 00:05:39.665 } 00:05:39.665 05:30:57 -- common/autotest_common.sh@1591 -- # true 00:05:39.665 05:30:57 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:39.665 05:30:57 -- 
common/autotest_common.sh@1595 -- # killprocess 4119464 00:05:39.665 05:30:57 -- common/autotest_common.sh@954 -- # '[' -z 4119464 ']' 00:05:39.665 05:30:57 -- common/autotest_common.sh@958 -- # kill -0 4119464 00:05:39.665 05:30:57 -- common/autotest_common.sh@959 -- # uname 00:05:39.665 05:30:57 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.665 05:30:57 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4119464 00:05:39.665 05:30:57 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.665 05:30:57 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.665 05:30:57 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119464' 00:05:39.665 killing process with pid 4119464 00:05:39.665 05:30:57 -- common/autotest_common.sh@973 -- # kill 4119464 00:05:39.665 05:30:57 -- common/autotest_common.sh@978 -- # wait 4119464 00:05:41.570 05:30:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:41.570 05:30:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:41.570 05:30:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.570 05:30:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.570 05:30:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:41.570 05:30:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.570 05:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.570 05:30:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:41.570 05:30:59 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:41.570 05:30:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.570 05:30:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.570 05:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.570 ************************************ 00:05:41.570 START TEST env 00:05:41.570 ************************************ 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:41.570 * Looking for test storage... 00:05:41.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.570 05:30:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.570 05:30:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.570 05:30:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.570 05:30:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.570 05:30:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.570 05:30:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.570 05:30:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.570 05:30:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.570 05:30:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.570 05:30:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.570 05:30:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.570 05:30:59 env -- scripts/common.sh@344 -- # case "$op" in 00:05:41.570 05:30:59 env -- scripts/common.sh@345 -- # : 1 00:05:41.570 05:30:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.570 05:30:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.570 05:30:59 env -- scripts/common.sh@365 -- # decimal 1 00:05:41.570 05:30:59 env -- scripts/common.sh@353 -- # local d=1 00:05:41.570 05:30:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.570 05:30:59 env -- scripts/common.sh@355 -- # echo 1 00:05:41.570 05:30:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.570 05:30:59 env -- scripts/common.sh@366 -- # decimal 2 00:05:41.570 05:30:59 env -- scripts/common.sh@353 -- # local d=2 00:05:41.570 05:30:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.570 05:30:59 env -- scripts/common.sh@355 -- # echo 2 00:05:41.570 05:30:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.570 05:30:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.570 05:30:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.570 05:30:59 env -- scripts/common.sh@368 -- # return 0 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.570 --rc genhtml_branch_coverage=1 00:05:41.570 --rc genhtml_function_coverage=1 00:05:41.570 --rc genhtml_legend=1 00:05:41.570 --rc geninfo_all_blocks=1 00:05:41.570 --rc geninfo_unexecuted_blocks=1 00:05:41.570 00:05:41.570 ' 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.570 --rc genhtml_branch_coverage=1 00:05:41.570 --rc genhtml_function_coverage=1 00:05:41.570 --rc genhtml_legend=1 00:05:41.570 --rc geninfo_all_blocks=1 00:05:41.570 --rc geninfo_unexecuted_blocks=1 00:05:41.570 00:05:41.570 ' 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.570 --rc genhtml_branch_coverage=1 00:05:41.570 --rc genhtml_function_coverage=1 00:05:41.570 --rc genhtml_legend=1 00:05:41.570 --rc geninfo_all_blocks=1 00:05:41.570 --rc geninfo_unexecuted_blocks=1 00:05:41.570 00:05:41.570 ' 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.570 --rc genhtml_branch_coverage=1 00:05:41.570 --rc genhtml_function_coverage=1 00:05:41.570 --rc genhtml_legend=1 00:05:41.570 --rc geninfo_all_blocks=1 00:05:41.570 --rc geninfo_unexecuted_blocks=1 00:05:41.570 00:05:41.570 ' 00:05:41.570 05:30:59 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.570 05:30:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.570 05:30:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.570 ************************************ 00:05:41.570 START TEST env_memory 00:05:41.570 ************************************ 00:05:41.570 05:30:59 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:41.570 00:05:41.570 00:05:41.570 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.570 http://cunit.sourceforge.net/ 00:05:41.571 00:05:41.571 00:05:41.571 Suite: memory 00:05:41.571 Test: alloc and free memory map ...[2024-12-10 05:30:59.301484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:41.571 passed 00:05:41.571 Test: mem map translation ...[2024-12-10 05:30:59.320423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.571 [2024-12-10 
05:30:59.320436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.571 [2024-12-10 05:30:59.320473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.571 [2024-12-10 05:30:59.320479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:41.571 passed 00:05:41.571 Test: mem map registration ...[2024-12-10 05:30:59.359782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:41.571 [2024-12-10 05:30:59.359796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:41.571 passed 00:05:41.571 Test: mem map adjacent registrations ...passed 00:05:41.571 00:05:41.571 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.571 suites 1 1 n/a 0 0 00:05:41.571 tests 4 4 4 0 0 00:05:41.571 asserts 152 152 152 0 n/a 00:05:41.571 00:05:41.571 Elapsed time = 0.135 seconds 00:05:41.571 00:05:41.571 real 0m0.148s 00:05:41.571 user 0m0.139s 00:05:41.571 sys 0m0.008s 00:05:41.571 05:30:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.571 05:30:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:41.571 ************************************ 00:05:41.571 END TEST env_memory 00:05:41.571 ************************************ 00:05:41.571 05:30:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:41.571 05:30:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:41.571 05:30:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.571 05:30:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.571 ************************************ 00:05:41.571 START TEST env_vtophys 00:05:41.571 ************************************ 00:05:41.571 05:30:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:41.571 EAL: lib.eal log level changed from notice to debug 00:05:41.571 EAL: Detected lcore 0 as core 0 on socket 0 00:05:41.571 EAL: Detected lcore 1 as core 1 on socket 0 00:05:41.571 EAL: Detected lcore 2 as core 2 on socket 0 00:05:41.571 EAL: Detected lcore 3 as core 3 on socket 0 00:05:41.571 EAL: Detected lcore 4 as core 4 on socket 0 00:05:41.571 EAL: Detected lcore 5 as core 5 on socket 0 00:05:41.571 EAL: Detected lcore 6 as core 6 on socket 0 00:05:41.571 EAL: Detected lcore 7 as core 8 on socket 0 00:05:41.571 EAL: Detected lcore 8 as core 9 on socket 0 00:05:41.571 EAL: Detected lcore 9 as core 10 on socket 0 00:05:41.571 EAL: Detected lcore 10 as core 11 on socket 0 00:05:41.571 EAL: Detected lcore 11 as core 12 on socket 0 00:05:41.571 EAL: Detected lcore 12 as core 13 on socket 0 00:05:41.571 EAL: Detected lcore 13 as core 16 on socket 0 00:05:41.571 EAL: Detected lcore 14 as core 17 on socket 0 00:05:41.571 EAL: Detected lcore 15 as core 18 on socket 0 00:05:41.571 EAL: Detected lcore 16 as core 19 on socket 0 00:05:41.571 EAL: Detected lcore 17 as core 20 on socket 0 00:05:41.571 EAL: Detected lcore 18 as core 21 on socket 0 00:05:41.571 EAL: Detected lcore 19 as core 25 on socket 0 00:05:41.571 EAL: Detected lcore 20 as core 26 on socket 0 00:05:41.571 EAL: Detected lcore 21 as core 27 on socket 0 00:05:41.571 EAL: Detected lcore 22 as core 28 on socket 0 00:05:41.571 EAL: Detected lcore 23 as core 29 on socket 0 00:05:41.571 EAL: Detected lcore 24 as core 0 on socket 1 00:05:41.571 EAL: Detected lcore 25 
as core 1 on socket 1 00:05:41.571 EAL: Detected lcore 26 as core 2 on socket 1 00:05:41.571 EAL: Detected lcore 27 as core 3 on socket 1 00:05:41.571 EAL: Detected lcore 28 as core 4 on socket 1 00:05:41.571 EAL: Detected lcore 29 as core 5 on socket 1 00:05:41.571 EAL: Detected lcore 30 as core 6 on socket 1 00:05:41.571 EAL: Detected lcore 31 as core 8 on socket 1 00:05:41.571 EAL: Detected lcore 32 as core 9 on socket 1 00:05:41.571 EAL: Detected lcore 33 as core 10 on socket 1 00:05:41.571 EAL: Detected lcore 34 as core 11 on socket 1 00:05:41.571 EAL: Detected lcore 35 as core 12 on socket 1 00:05:41.571 EAL: Detected lcore 36 as core 13 on socket 1 00:05:41.571 EAL: Detected lcore 37 as core 16 on socket 1 00:05:41.571 EAL: Detected lcore 38 as core 17 on socket 1 00:05:41.571 EAL: Detected lcore 39 as core 18 on socket 1 00:05:41.571 EAL: Detected lcore 40 as core 19 on socket 1 00:05:41.571 EAL: Detected lcore 41 as core 20 on socket 1 00:05:41.571 EAL: Detected lcore 42 as core 21 on socket 1 00:05:41.571 EAL: Detected lcore 43 as core 25 on socket 1 00:05:41.571 EAL: Detected lcore 44 as core 26 on socket 1 00:05:41.571 EAL: Detected lcore 45 as core 27 on socket 1 00:05:41.571 EAL: Detected lcore 46 as core 28 on socket 1 00:05:41.571 EAL: Detected lcore 47 as core 29 on socket 1 00:05:41.571 EAL: Detected lcore 48 as core 0 on socket 0 00:05:41.571 EAL: Detected lcore 49 as core 1 on socket 0 00:05:41.571 EAL: Detected lcore 50 as core 2 on socket 0 00:05:41.571 EAL: Detected lcore 51 as core 3 on socket 0 00:05:41.571 EAL: Detected lcore 52 as core 4 on socket 0 00:05:41.571 EAL: Detected lcore 53 as core 5 on socket 0 00:05:41.571 EAL: Detected lcore 54 as core 6 on socket 0 00:05:41.571 EAL: Detected lcore 55 as core 8 on socket 0 00:05:41.571 EAL: Detected lcore 56 as core 9 on socket 0 00:05:41.571 EAL: Detected lcore 57 as core 10 on socket 0 00:05:41.571 EAL: Detected lcore 58 as core 11 on socket 0 00:05:41.571 EAL: Detected lcore 59 as core 12 
on socket 0 00:05:41.571 EAL: Detected lcore 60 as core 13 on socket 0 00:05:41.571 EAL: Detected lcore 61 as core 16 on socket 0 00:05:41.571 EAL: Detected lcore 62 as core 17 on socket 0 00:05:41.571 EAL: Detected lcore 63 as core 18 on socket 0 00:05:41.571 EAL: Detected lcore 64 as core 19 on socket 0 00:05:41.571 EAL: Detected lcore 65 as core 20 on socket 0 00:05:41.571 EAL: Detected lcore 66 as core 21 on socket 0 00:05:41.571 EAL: Detected lcore 67 as core 25 on socket 0 00:05:41.571 EAL: Detected lcore 68 as core 26 on socket 0 00:05:41.571 EAL: Detected lcore 69 as core 27 on socket 0 00:05:41.571 EAL: Detected lcore 70 as core 28 on socket 0 00:05:41.571 EAL: Detected lcore 71 as core 29 on socket 0 00:05:41.571 EAL: Detected lcore 72 as core 0 on socket 1 00:05:41.571 EAL: Detected lcore 73 as core 1 on socket 1 00:05:41.571 EAL: Detected lcore 74 as core 2 on socket 1 00:05:41.571 EAL: Detected lcore 75 as core 3 on socket 1 00:05:41.571 EAL: Detected lcore 76 as core 4 on socket 1 00:05:41.571 EAL: Detected lcore 77 as core 5 on socket 1 00:05:41.571 EAL: Detected lcore 78 as core 6 on socket 1 00:05:41.571 EAL: Detected lcore 79 as core 8 on socket 1 00:05:41.571 EAL: Detected lcore 80 as core 9 on socket 1 00:05:41.571 EAL: Detected lcore 81 as core 10 on socket 1 00:05:41.571 EAL: Detected lcore 82 as core 11 on socket 1 00:05:41.571 EAL: Detected lcore 83 as core 12 on socket 1 00:05:41.571 EAL: Detected lcore 84 as core 13 on socket 1 00:05:41.571 EAL: Detected lcore 85 as core 16 on socket 1 00:05:41.571 EAL: Detected lcore 86 as core 17 on socket 1 00:05:41.571 EAL: Detected lcore 87 as core 18 on socket 1 00:05:41.571 EAL: Detected lcore 88 as core 19 on socket 1 00:05:41.571 EAL: Detected lcore 89 as core 20 on socket 1 00:05:41.571 EAL: Detected lcore 90 as core 21 on socket 1 00:05:41.571 EAL: Detected lcore 91 as core 25 on socket 1 00:05:41.571 EAL: Detected lcore 92 as core 26 on socket 1 00:05:41.571 EAL: Detected lcore 93 as core 27 on 
socket 1 00:05:41.571 EAL: Detected lcore 94 as core 28 on socket 1 00:05:41.571 EAL: Detected lcore 95 as core 29 on socket 1 00:05:41.571 EAL: Maximum logical cores by configuration: 128 00:05:41.571 EAL: Detected CPU lcores: 96 00:05:41.571 EAL: Detected NUMA nodes: 2 00:05:41.571 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:41.571 EAL: Detected shared linkage of DPDK 00:05:41.571 EAL: No shared files mode enabled, IPC will be disabled 00:05:41.831 EAL: Bus pci wants IOVA as 'DC' 00:05:41.831 EAL: Buses did not request a specific IOVA mode. 00:05:41.831 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:41.831 EAL: Selected IOVA mode 'VA' 00:05:41.831 EAL: Probing VFIO support... 00:05:41.831 EAL: IOMMU type 1 (Type 1) is supported 00:05:41.831 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:41.831 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:41.831 EAL: VFIO support initialized 00:05:41.831 EAL: Ask a virtual area of 0x2e000 bytes 00:05:41.831 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:41.831 EAL: Setting up physically contiguous memory... 
00:05:41.831 EAL: Setting maximum number of open files to 524288 00:05:41.831 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:41.831 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:41.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:41.831 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:41.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.831 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:41.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:41.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.831 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:41.831 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:41.831 EAL: Hugepages will be freed exactly as allocated. 
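The eight "Ask a virtual area" / "Virtual area found" pairs above follow a regular pattern: each memseg list reserves a small header (0x61000 bytes) followed by a 16 GiB data area (0x400000000 bytes), and the next request begins where the previous reservation ended. A minimal sketch of that layout — the base, header, and data sizes are copied from the log, while the 2 MB alignment rule is an assumption made for this sketch, not taken from EAL source:

```python
# Reproduce the memseg-list virtual-address layout from the EAL log.
# BASE, HEADER and DATA are copied from the log entries above; the
# 2 MB alignment rule is an assumption made for this sketch.
BASE = 0x200000000000 + 0x2e000   # first ask after the initial 0x2e000 area
HEADER = 0x61000                  # per-list header reservation
DATA = 0x400000000                # 16 GiB data area per memseg list
ALIGN = 0x200000                  # assumed 2 MB hugepage alignment

def align_up(addr, align=ALIGN):
    """Round addr up to the next multiple of align."""
    return (addr + align - 1) // align * align

def memseg_layout(n_lists=8):
    """Yield (header_va, data_va) for each memseg list in order."""
    va = BASE
    for _ in range(n_lists):
        header = va
        data = align_up(header + HEADER)
        yield header, data
        va = data + DATA

for header, data in memseg_layout():
    print(hex(header), hex(data))
```

The addresses this prints line up with every "Virtual area found" entry in the log above, including the transition from the four socket-0 lists to the four socket-1 lists.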
00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: TSC frequency is ~2100000 KHz 00:05:41.831 EAL: Main lcore 0 is ready (tid=7ff8d5a7ca00;cpuset=[0]) 00:05:41.831 EAL: Trying to obtain current memory policy. 00:05:41.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.831 EAL: Restoring previous memory policy: 0 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: Heap on socket 0 was expanded by 2MB 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:41.831 EAL: Mem event callback 'spdk:(nil)' registered 00:05:41.831 00:05:41.831 00:05:41.831 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.831 http://cunit.sourceforge.net/ 00:05:41.831 00:05:41.831 00:05:41.831 Suite: components_suite 00:05:41.831 Test: vtophys_malloc_test ...passed 00:05:41.831 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:41.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.831 EAL: Restoring previous memory policy: 4 00:05:41.831 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: Heap on socket 0 was expanded by 4MB 00:05:41.831 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: Heap on socket 0 was shrunk by 4MB 00:05:41.831 EAL: Trying to obtain current memory policy. 
00:05:41.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.831 EAL: Restoring previous memory policy: 4 00:05:41.831 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: Heap on socket 0 was expanded by 6MB 00:05:41.831 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: Heap on socket 0 was shrunk by 6MB 00:05:41.831 EAL: Trying to obtain current memory policy. 00:05:41.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.831 EAL: Restoring previous memory policy: 4 00:05:41.831 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.831 EAL: No shared files mode enabled, IPC is disabled 00:05:41.831 EAL: Heap on socket 0 was expanded by 10MB 00:05:41.831 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.831 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was shrunk by 10MB 00:05:41.832 EAL: Trying to obtain current memory policy. 00:05:41.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.832 EAL: Restoring previous memory policy: 4 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was expanded by 18MB 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was shrunk by 18MB 00:05:41.832 EAL: Trying to obtain current memory policy. 
00:05:41.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.832 EAL: Restoring previous memory policy: 4 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was expanded by 34MB 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was shrunk by 34MB 00:05:41.832 EAL: Trying to obtain current memory policy. 00:05:41.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.832 EAL: Restoring previous memory policy: 4 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was expanded by 66MB 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was shrunk by 66MB 00:05:41.832 EAL: Trying to obtain current memory policy. 00:05:41.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.832 EAL: Restoring previous memory policy: 4 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was expanded by 130MB 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was shrunk by 130MB 00:05:41.832 EAL: Trying to obtain current memory policy. 
00:05:41.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.832 EAL: Restoring previous memory policy: 4 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.832 EAL: request: mp_malloc_sync 00:05:41.832 EAL: No shared files mode enabled, IPC is disabled 00:05:41.832 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.832 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.091 EAL: request: mp_malloc_sync 00:05:42.091 EAL: No shared files mode enabled, IPC is disabled 00:05:42.091 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.091 EAL: Trying to obtain current memory policy. 00:05:42.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.091 EAL: Restoring previous memory policy: 4 00:05:42.091 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.091 EAL: request: mp_malloc_sync 00:05:42.091 EAL: No shared files mode enabled, IPC is disabled 00:05:42.091 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.091 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.350 EAL: request: mp_malloc_sync 00:05:42.350 EAL: No shared files mode enabled, IPC is disabled 00:05:42.350 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.350 EAL: Trying to obtain current memory policy. 
00:05:42.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.350 EAL: Restoring previous memory policy: 4 00:05:42.350 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.350 EAL: request: mp_malloc_sync 00:05:42.350 EAL: No shared files mode enabled, IPC is disabled 00:05:42.350 EAL: Heap on socket 0 was expanded by 1026MB 00:05:42.608 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.866 EAL: request: mp_malloc_sync 00:05:42.866 EAL: No shared files mode enabled, IPC is disabled 00:05:42.866 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:42.866 passed 00:05:42.866 00:05:42.866 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.866 suites 1 1 n/a 0 0 00:05:42.866 tests 2 2 2 0 0 00:05:42.866 asserts 497 497 497 0 n/a 00:05:42.866 00:05:42.866 Elapsed time = 0.977 seconds 00:05:42.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.866 EAL: request: mp_malloc_sync 00:05:42.866 EAL: No shared files mode enabled, IPC is disabled 00:05:42.866 EAL: Heap on socket 0 was shrunk by 2MB 00:05:42.866 EAL: No shared files mode enabled, IPC is disabled 00:05:42.866 EAL: No shared files mode enabled, IPC is disabled 00:05:42.866 EAL: No shared files mode enabled, IPC is disabled 00:05:42.866 00:05:42.866 real 0m1.118s 00:05:42.866 user 0m0.654s 00:05:42.866 sys 0m0.435s 00:05:42.866 05:31:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.866 05:31:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:42.866 ************************************ 00:05:42.866 END TEST env_vtophys 00:05:42.866 ************************************ 00:05:42.866 05:31:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:42.866 05:31:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.867 05:31:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.867 05:31:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.867 
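The vtophys_spdk_malloc_test pass above expands and shrinks the socket-0 heap by 4, 6, 10, 18, ..., 1026 MB. Those sizes are consistent with a doubling allocation sweep in which each 2^n MB request costs one extra 2 MB hugepage of overhead; a sketch under that assumption (the overhead model is inferred from the log, not read from the test source):

```python
# Reproduce the "Heap on socket 0 was expanded by N MB" sequence seen
# in the env_vtophys log. Assumption: the test allocates 2**n MB buffers
# and each expansion includes one extra 2 MB hugepage of overhead.
HUGEPAGE_MB = 2

def expected_expansions(max_alloc_mb=1024):
    """Return heap-expansion sizes (MB) for a doubling allocation sweep."""
    return [2 ** n + HUGEPAGE_MB
            for n in range(1, max_alloc_mb.bit_length())
            if 2 ** n <= max_alloc_mb]

print(expected_expansions())
```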
************************************ 00:05:42.867 START TEST env_pci 00:05:42.867 ************************************ 00:05:42.867 05:31:00 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:42.867 00:05:42.867 00:05:42.867 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.867 http://cunit.sourceforge.net/ 00:05:42.867 00:05:42.867 00:05:42.867 Suite: pci 00:05:42.867 Test: pci_hook ...[2024-12-10 05:31:00.684734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4120729 has claimed it 00:05:42.867 EAL: Cannot find device (10000:00:01.0) 00:05:42.867 EAL: Failed to attach device on primary process 00:05:42.867 passed 00:05:42.867 00:05:42.867 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.867 suites 1 1 n/a 0 0 00:05:42.867 tests 1 1 1 0 0 00:05:42.867 asserts 25 25 25 0 n/a 00:05:42.867 00:05:42.867 Elapsed time = 0.030 seconds 00:05:42.867 00:05:42.867 real 0m0.049s 00:05:42.867 user 0m0.013s 00:05:42.867 sys 0m0.036s 00:05:42.867 05:31:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.867 05:31:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:42.867 ************************************ 00:05:42.867 END TEST env_pci 00:05:42.867 ************************************ 00:05:42.867 05:31:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:42.867 05:31:00 env -- env/env.sh@15 -- # uname 00:05:42.867 05:31:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:42.867 05:31:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:42.867 05:31:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.867 05:31:00 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:42.867 05:31:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.867 05:31:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.867 ************************************ 00:05:42.867 START TEST env_dpdk_post_init 00:05:42.867 ************************************ 00:05:42.867 05:31:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.126 EAL: Detected CPU lcores: 96 00:05:43.126 EAL: Detected NUMA nodes: 2 00:05:43.126 EAL: Detected shared linkage of DPDK 00:05:43.126 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.126 EAL: Selected IOVA mode 'VA' 00:05:43.126 EAL: VFIO support initialized 00:05:43.126 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.126 EAL: Using IOMMU type 1 (Type 1) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:43.126 EAL: Ignore mapping IO port bar(1) 00:05:43.126 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:44.062 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:44.062 EAL: Ignore mapping IO port bar(1) 00:05:44.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:47.347 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:47.347 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:47.347 Starting DPDK initialization... 00:05:47.347 Starting SPDK post initialization... 00:05:47.347 SPDK NVMe probe 00:05:47.347 Attaching to 0000:5e:00.0 00:05:47.347 Attached to 0000:5e:00.0 00:05:47.347 Cleaning up... 
00:05:47.347 00:05:47.347 real 0m4.379s 00:05:47.347 user 0m2.966s 00:05:47.347 sys 0m0.484s 00:05:47.347 05:31:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.347 05:31:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.347 ************************************ 00:05:47.347 END TEST env_dpdk_post_init 00:05:47.347 ************************************ 00:05:47.347 05:31:05 env -- env/env.sh@26 -- # uname 00:05:47.347 05:31:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.347 05:31:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.347 05:31:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.347 05:31:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.347 05:31:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.347 ************************************ 00:05:47.348 START TEST env_mem_callbacks 00:05:47.348 ************************************ 00:05:47.348 05:31:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.348 EAL: Detected CPU lcores: 96 00:05:47.348 EAL: Detected NUMA nodes: 2 00:05:47.348 EAL: Detected shared linkage of DPDK 00:05:47.348 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.348 EAL: Selected IOVA mode 'VA' 00:05:47.348 EAL: VFIO support initialized 00:05:47.348 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.348 00:05:47.348 00:05:47.348 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.348 http://cunit.sourceforge.net/ 00:05:47.348 00:05:47.348 00:05:47.348 Suite: memory 00:05:47.348 Test: test ... 
00:05:47.348 register 0x200000200000 2097152 00:05:47.348 malloc 3145728 00:05:47.348 register 0x200000400000 4194304 00:05:47.348 buf 0x200000500000 len 3145728 PASSED 00:05:47.348 malloc 64 00:05:47.348 buf 0x2000004fff40 len 64 PASSED 00:05:47.348 malloc 4194304 00:05:47.606 register 0x200000800000 6291456 00:05:47.606 buf 0x200000a00000 len 4194304 PASSED 00:05:47.606 free 0x200000500000 3145728 00:05:47.606 free 0x2000004fff40 64 00:05:47.606 unregister 0x200000400000 4194304 PASSED 00:05:47.606 free 0x200000a00000 4194304 00:05:47.606 unregister 0x200000800000 6291456 PASSED 00:05:47.606 malloc 8388608 00:05:47.606 register 0x200000400000 10485760 00:05:47.607 buf 0x200000600000 len 8388608 PASSED 00:05:47.607 free 0x200000600000 8388608 00:05:47.607 unregister 0x200000400000 10485760 PASSED 00:05:47.607 passed 00:05:47.607 00:05:47.607 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.607 suites 1 1 n/a 0 0 00:05:47.607 tests 1 1 1 0 0 00:05:47.607 asserts 15 15 15 0 n/a 00:05:47.607 00:05:47.607 Elapsed time = 0.007 seconds 00:05:47.607 00:05:47.607 real 0m0.059s 00:05:47.607 user 0m0.022s 00:05:47.607 sys 0m0.037s 00:05:47.607 05:31:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.607 05:31:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.607 ************************************ 00:05:47.607 END TEST env_mem_callbacks 00:05:47.607 ************************************ 00:05:47.607 00:05:47.607 real 0m6.296s 00:05:47.607 user 0m4.051s 00:05:47.607 sys 0m1.319s 00:05:47.607 05:31:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.607 05:31:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.607 ************************************ 00:05:47.607 END TEST env 00:05:47.607 ************************************ 00:05:47.607 05:31:05 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.607 05:31:05 
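The register/unregister pairs in the mem_callbacks test above can be replayed to check the invariant the PASSED markers track: every unregister exactly matches an earlier register of the same base address and length, and only the initial 2 MB region remains registered at the end. A sketch with the events copied from the log:

```python
# Replay the register/unregister events from the mem_callbacks log
# (addresses and lengths copied from the log entries above) and check
# that every unregister matches a prior register of the same region.
events = [
    ("register",   0x200000200000, 2097152),
    ("register",   0x200000400000, 4194304),
    ("unregister", 0x200000400000, 4194304),
    ("register",   0x200000800000, 6291456),
    ("unregister", 0x200000800000, 6291456),
    ("register",   0x200000400000, 10485760),
    ("unregister", 0x200000400000, 10485760),
]

def replay(events):
    """Return the set of (addr, length) regions still registered."""
    live = set()
    for op, addr, length in events:
        if op == "register":
            live.add((addr, length))
        else:
            # raises KeyError if an unregister has no matching register
            live.remove((addr, length))
    return live

print({(hex(a), n) for a, n in replay(events)})
```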
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.607 05:31:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.607 05:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.607 ************************************ 00:05:47.607 START TEST rpc 00:05:47.607 ************************************ 00:05:47.607 05:31:05 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.607 * Looking for test storage... 00:05:47.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.607 05:31:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.607 05:31:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.607 05:31:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.865 05:31:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.865 05:31:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.866 05:31:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.866 05:31:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.866 05:31:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.866 05:31:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.866 05:31:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.866 05:31:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.866 05:31:05 rpc -- scripts/common.sh@345 -- # : 1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.866 05:31:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.866 05:31:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.866 05:31:05 rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.866 05:31:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.866 05:31:05 rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.866 05:31:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.866 05:31:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.866 05:31:05 rpc -- scripts/common.sh@368 -- # return 0 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.866 --rc genhtml_branch_coverage=1 00:05:47.866 --rc genhtml_function_coverage=1 00:05:47.866 --rc genhtml_legend=1 00:05:47.866 --rc geninfo_all_blocks=1 00:05:47.866 --rc geninfo_unexecuted_blocks=1 00:05:47.866 00:05:47.866 ' 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.866 --rc genhtml_branch_coverage=1 00:05:47.866 --rc genhtml_function_coverage=1 00:05:47.866 --rc genhtml_legend=1 00:05:47.866 --rc geninfo_all_blocks=1 00:05:47.866 --rc geninfo_unexecuted_blocks=1 00:05:47.866 00:05:47.866 ' 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:47.866 --rc genhtml_branch_coverage=1 00:05:47.866 --rc genhtml_function_coverage=1 00:05:47.866 --rc genhtml_legend=1 00:05:47.866 --rc geninfo_all_blocks=1 00:05:47.866 --rc geninfo_unexecuted_blocks=1 00:05:47.866 00:05:47.866 ' 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.866 --rc genhtml_branch_coverage=1 00:05:47.866 --rc genhtml_function_coverage=1 00:05:47.866 --rc genhtml_legend=1 00:05:47.866 --rc geninfo_all_blocks=1 00:05:47.866 --rc geninfo_unexecuted_blocks=1 00:05:47.866 00:05:47.866 ' 00:05:47.866 05:31:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4121584 00:05:47.866 05:31:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:47.866 05:31:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.866 05:31:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4121584 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 4121584 ']' 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.866 05:31:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.866 [2024-12-10 05:31:05.642874] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
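Earlier in this chunk, scripts/common.sh traces `lt 1.15 2` / `cmp_versions` to decide which lcov option set to export: it splits each version string into components, pads the shorter one, and compares numerically component by component. A hypothetical Python stand-in for that comparison — a sketch of the logic visible in the trace, not the SPDK shell implementation, and simplified to dot separators only (the shell also splits on '-' and ':'):

```python
# Dotted-version comparison in the style of scripts/common.sh cmp_versions.
# Simplified sketch: numeric components split on '.', shorter side
# zero-padded, compared left to right.
def version_lt(a: str, b: str) -> bool:
    """True if dotted version string a sorts strictly before b."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb

print(version_lt("1.15", "2"))
```

Note that the comparison is numeric, not lexicographic: "1.2" sorts before "1.15", which is why the script parses components instead of comparing strings.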
00:05:47.866 [2024-12-10 05:31:05.642922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4121584 ] 00:05:47.866 [2024-12-10 05:31:05.724539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.866 [2024-12-10 05:31:05.761906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:47.866 [2024-12-10 05:31:05.761944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4121584' to capture a snapshot of events at runtime. 00:05:47.866 [2024-12-10 05:31:05.761951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:47.866 [2024-12-10 05:31:05.761956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:47.866 [2024-12-10 05:31:05.761961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4121584 for offline analysis/debug. 
00:05:47.866 [2024-12-10 05:31:05.762493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.801 05:31:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.801 05:31:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.801 05:31:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.802 05:31:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.802 05:31:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.802 05:31:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.802 05:31:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.802 05:31:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.802 05:31:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 ************************************ 00:05:48.802 START TEST rpc_integrity 00:05:48.802 ************************************ 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.802 { 00:05:48.802 "name": "Malloc0", 00:05:48.802 "aliases": [ 00:05:48.802 "8894db84-ac10-44fb-af18-64d2b1cd3492" 00:05:48.802 ], 00:05:48.802 "product_name": "Malloc disk", 00:05:48.802 "block_size": 512, 00:05:48.802 "num_blocks": 16384, 00:05:48.802 "uuid": "8894db84-ac10-44fb-af18-64d2b1cd3492", 00:05:48.802 "assigned_rate_limits": { 00:05:48.802 "rw_ios_per_sec": 0, 00:05:48.802 "rw_mbytes_per_sec": 0, 00:05:48.802 "r_mbytes_per_sec": 0, 00:05:48.802 "w_mbytes_per_sec": 0 00:05:48.802 }, 00:05:48.802 "claimed": false, 00:05:48.802 "zoned": false, 00:05:48.802 "supported_io_types": { 00:05:48.802 "read": true, 00:05:48.802 "write": true, 00:05:48.802 "unmap": true, 00:05:48.802 "flush": true, 00:05:48.802 "reset": true, 00:05:48.802 "nvme_admin": false, 00:05:48.802 "nvme_io": false, 00:05:48.802 "nvme_io_md": false, 00:05:48.802 "write_zeroes": true, 00:05:48.802 "zcopy": true, 00:05:48.802 "get_zone_info": false, 00:05:48.802 
"zone_management": false, 00:05:48.802 "zone_append": false, 00:05:48.802 "compare": false, 00:05:48.802 "compare_and_write": false, 00:05:48.802 "abort": true, 00:05:48.802 "seek_hole": false, 00:05:48.802 "seek_data": false, 00:05:48.802 "copy": true, 00:05:48.802 "nvme_iov_md": false 00:05:48.802 }, 00:05:48.802 "memory_domains": [ 00:05:48.802 { 00:05:48.802 "dma_device_id": "system", 00:05:48.802 "dma_device_type": 1 00:05:48.802 }, 00:05:48.802 { 00:05:48.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.802 "dma_device_type": 2 00:05:48.802 } 00:05:48.802 ], 00:05:48.802 "driver_specific": {} 00:05:48.802 } 00:05:48.802 ]' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 [2024-12-10 05:31:06.636594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.802 [2024-12-10 05:31:06.636623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.802 [2024-12-10 05:31:06.636635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1392ac0 00:05:48.802 [2024-12-10 05:31:06.636641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.802 [2024-12-10 05:31:06.637703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.802 [2024-12-10 05:31:06.637725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.802 Passthru0 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.802 { 00:05:48.802 "name": "Malloc0", 00:05:48.802 "aliases": [ 00:05:48.802 "8894db84-ac10-44fb-af18-64d2b1cd3492" 00:05:48.802 ], 00:05:48.802 "product_name": "Malloc disk", 00:05:48.802 "block_size": 512, 00:05:48.802 "num_blocks": 16384, 00:05:48.802 "uuid": "8894db84-ac10-44fb-af18-64d2b1cd3492", 00:05:48.802 "assigned_rate_limits": { 00:05:48.802 "rw_ios_per_sec": 0, 00:05:48.802 "rw_mbytes_per_sec": 0, 00:05:48.802 "r_mbytes_per_sec": 0, 00:05:48.802 "w_mbytes_per_sec": 0 00:05:48.802 }, 00:05:48.802 "claimed": true, 00:05:48.802 "claim_type": "exclusive_write", 00:05:48.802 "zoned": false, 00:05:48.802 "supported_io_types": { 00:05:48.802 "read": true, 00:05:48.802 "write": true, 00:05:48.802 "unmap": true, 00:05:48.802 "flush": true, 00:05:48.802 "reset": true, 00:05:48.802 "nvme_admin": false, 00:05:48.802 "nvme_io": false, 00:05:48.802 "nvme_io_md": false, 00:05:48.802 "write_zeroes": true, 00:05:48.802 "zcopy": true, 00:05:48.802 "get_zone_info": false, 00:05:48.802 "zone_management": false, 00:05:48.802 "zone_append": false, 00:05:48.802 "compare": false, 00:05:48.802 "compare_and_write": false, 00:05:48.802 "abort": true, 00:05:48.802 "seek_hole": false, 00:05:48.802 "seek_data": false, 00:05:48.802 "copy": true, 00:05:48.802 "nvme_iov_md": false 00:05:48.802 }, 00:05:48.802 "memory_domains": [ 00:05:48.802 { 00:05:48.802 "dma_device_id": "system", 00:05:48.802 "dma_device_type": 1 00:05:48.802 }, 00:05:48.802 { 00:05:48.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.802 "dma_device_type": 2 00:05:48.802 } 00:05:48.802 ], 00:05:48.802 "driver_specific": {} 00:05:48.802 }, 00:05:48.802 { 
00:05:48.802 "name": "Passthru0", 00:05:48.802 "aliases": [ 00:05:48.802 "78a559b5-02b3-5eb0-9e08-384e1e50346b" 00:05:48.802 ], 00:05:48.802 "product_name": "passthru", 00:05:48.802 "block_size": 512, 00:05:48.802 "num_blocks": 16384, 00:05:48.802 "uuid": "78a559b5-02b3-5eb0-9e08-384e1e50346b", 00:05:48.802 "assigned_rate_limits": { 00:05:48.802 "rw_ios_per_sec": 0, 00:05:48.802 "rw_mbytes_per_sec": 0, 00:05:48.802 "r_mbytes_per_sec": 0, 00:05:48.802 "w_mbytes_per_sec": 0 00:05:48.802 }, 00:05:48.802 "claimed": false, 00:05:48.802 "zoned": false, 00:05:48.802 "supported_io_types": { 00:05:48.802 "read": true, 00:05:48.802 "write": true, 00:05:48.802 "unmap": true, 00:05:48.802 "flush": true, 00:05:48.802 "reset": true, 00:05:48.802 "nvme_admin": false, 00:05:48.802 "nvme_io": false, 00:05:48.802 "nvme_io_md": false, 00:05:48.802 "write_zeroes": true, 00:05:48.802 "zcopy": true, 00:05:48.802 "get_zone_info": false, 00:05:48.802 "zone_management": false, 00:05:48.802 "zone_append": false, 00:05:48.802 "compare": false, 00:05:48.802 "compare_and_write": false, 00:05:48.802 "abort": true, 00:05:48.802 "seek_hole": false, 00:05:48.802 "seek_data": false, 00:05:48.802 "copy": true, 00:05:48.802 "nvme_iov_md": false 00:05:48.802 }, 00:05:48.802 "memory_domains": [ 00:05:48.802 { 00:05:48.802 "dma_device_id": "system", 00:05:48.802 "dma_device_type": 1 00:05:48.802 }, 00:05:48.802 { 00:05:48.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.802 "dma_device_type": 2 00:05:48.802 } 00:05:48.802 ], 00:05:48.802 "driver_specific": { 00:05:48.802 "passthru": { 00:05:48.802 "name": "Passthru0", 00:05:48.802 "base_bdev_name": "Malloc0" 00:05:48.802 } 00:05:48.802 } 00:05:48.802 } 00:05:48.802 ]' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.802 05:31:06 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.802 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.802 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.061 05:31:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.061 00:05:49.061 real 0m0.272s 00:05:49.061 user 0m0.171s 00:05:49.061 sys 0m0.034s 00:05:49.061 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.061 05:31:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.061 ************************************ 00:05:49.061 END TEST rpc_integrity 00:05:49.061 ************************************ 00:05:49.061 05:31:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.061 05:31:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.061 05:31:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.061 05:31:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.061 ************************************ 00:05:49.061 START TEST rpc_plugins 
00:05:49.061 ************************************ 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.061 { 00:05:49.061 "name": "Malloc1", 00:05:49.061 "aliases": [ 00:05:49.061 "400f08be-2347-41eb-b711-d272857fc241" 00:05:49.061 ], 00:05:49.061 "product_name": "Malloc disk", 00:05:49.061 "block_size": 4096, 00:05:49.061 "num_blocks": 256, 00:05:49.061 "uuid": "400f08be-2347-41eb-b711-d272857fc241", 00:05:49.061 "assigned_rate_limits": { 00:05:49.061 "rw_ios_per_sec": 0, 00:05:49.061 "rw_mbytes_per_sec": 0, 00:05:49.061 "r_mbytes_per_sec": 0, 00:05:49.061 "w_mbytes_per_sec": 0 00:05:49.061 }, 00:05:49.061 "claimed": false, 00:05:49.061 "zoned": false, 00:05:49.061 "supported_io_types": { 00:05:49.061 "read": true, 00:05:49.061 "write": true, 00:05:49.061 "unmap": true, 00:05:49.061 "flush": true, 00:05:49.061 "reset": true, 00:05:49.061 "nvme_admin": false, 00:05:49.061 "nvme_io": false, 00:05:49.061 "nvme_io_md": false, 00:05:49.061 "write_zeroes": true, 00:05:49.061 "zcopy": true, 00:05:49.061 "get_zone_info": false, 00:05:49.061 "zone_management": false, 00:05:49.061 
"zone_append": false, 00:05:49.061 "compare": false, 00:05:49.061 "compare_and_write": false, 00:05:49.061 "abort": true, 00:05:49.061 "seek_hole": false, 00:05:49.061 "seek_data": false, 00:05:49.061 "copy": true, 00:05:49.061 "nvme_iov_md": false 00:05:49.061 }, 00:05:49.061 "memory_domains": [ 00:05:49.061 { 00:05:49.061 "dma_device_id": "system", 00:05:49.061 "dma_device_type": 1 00:05:49.061 }, 00:05:49.061 { 00:05:49.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.061 "dma_device_type": 2 00:05:49.061 } 00:05:49.061 ], 00:05:49.061 "driver_specific": {} 00:05:49.061 } 00:05:49.061 ]' 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.061 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.061 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.062 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.062 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.062 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.062 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.062 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.062 05:31:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:49.062 00:05:49.062 real 0m0.144s 00:05:49.062 user 0m0.090s 00:05:49.062 sys 0m0.016s 00:05:49.062 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.062 05:31:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.062 ************************************ 
00:05:49.062 END TEST rpc_plugins 00:05:49.062 ************************************ 00:05:49.320 05:31:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.320 05:31:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.320 05:31:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.320 05:31:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.320 ************************************ 00:05:49.320 START TEST rpc_trace_cmd_test 00:05:49.320 ************************************ 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.320 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4121584", 00:05:49.320 "tpoint_group_mask": "0x8", 00:05:49.320 "iscsi_conn": { 00:05:49.320 "mask": "0x2", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "scsi": { 00:05:49.320 "mask": "0x4", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "bdev": { 00:05:49.320 "mask": "0x8", 00:05:49.320 "tpoint_mask": "0xffffffffffffffff" 00:05:49.320 }, 00:05:49.320 "nvmf_rdma": { 00:05:49.320 "mask": "0x10", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "nvmf_tcp": { 00:05:49.320 "mask": "0x20", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "ftl": { 00:05:49.320 "mask": "0x40", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "blobfs": { 00:05:49.320 "mask": "0x80", 00:05:49.320 
"tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "dsa": { 00:05:49.320 "mask": "0x200", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "thread": { 00:05:49.320 "mask": "0x400", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "nvme_pcie": { 00:05:49.320 "mask": "0x800", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "iaa": { 00:05:49.320 "mask": "0x1000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "nvme_tcp": { 00:05:49.320 "mask": "0x2000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "bdev_nvme": { 00:05:49.320 "mask": "0x4000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "sock": { 00:05:49.320 "mask": "0x8000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "blob": { 00:05:49.320 "mask": "0x10000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "bdev_raid": { 00:05:49.320 "mask": "0x20000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 }, 00:05:49.320 "scheduler": { 00:05:49.320 "mask": "0x40000", 00:05:49.320 "tpoint_mask": "0x0" 00:05:49.320 } 00:05:49.320 }' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.320 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.579 05:31:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:49.579 00:05:49.579 real 0m0.231s 00:05:49.579 user 0m0.198s 00:05:49.579 sys 0m0.026s 00:05:49.579 05:31:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.579 05:31:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.579 ************************************ 00:05:49.579 END TEST rpc_trace_cmd_test 00:05:49.579 ************************************ 00:05:49.579 05:31:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.579 05:31:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.579 05:31:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.579 05:31:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.579 05:31:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.579 05:31:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.579 ************************************ 00:05:49.579 START TEST rpc_daemon_integrity 00:05:49.579 ************************************ 00:05:49.579 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.580 { 00:05:49.580 "name": "Malloc2", 00:05:49.580 "aliases": [ 00:05:49.580 "5f539435-5570-49b5-b3b9-0731b3c8d23c" 00:05:49.580 ], 00:05:49.580 "product_name": "Malloc disk", 00:05:49.580 "block_size": 512, 00:05:49.580 "num_blocks": 16384, 00:05:49.580 "uuid": "5f539435-5570-49b5-b3b9-0731b3c8d23c", 00:05:49.580 "assigned_rate_limits": { 00:05:49.580 "rw_ios_per_sec": 0, 00:05:49.580 "rw_mbytes_per_sec": 0, 00:05:49.580 "r_mbytes_per_sec": 0, 00:05:49.580 "w_mbytes_per_sec": 0 00:05:49.580 }, 00:05:49.580 "claimed": false, 00:05:49.580 "zoned": false, 00:05:49.580 "supported_io_types": { 00:05:49.580 "read": true, 00:05:49.580 "write": true, 00:05:49.580 "unmap": true, 00:05:49.580 "flush": true, 00:05:49.580 "reset": true, 00:05:49.580 "nvme_admin": false, 00:05:49.580 "nvme_io": false, 00:05:49.580 "nvme_io_md": false, 00:05:49.580 "write_zeroes": true, 00:05:49.580 "zcopy": true, 00:05:49.580 "get_zone_info": false, 00:05:49.580 "zone_management": false, 00:05:49.580 "zone_append": false, 00:05:49.580 "compare": false, 00:05:49.580 "compare_and_write": false, 00:05:49.580 "abort": true, 00:05:49.580 "seek_hole": false, 00:05:49.580 "seek_data": false, 00:05:49.580 "copy": true, 00:05:49.580 "nvme_iov_md": false 00:05:49.580 }, 00:05:49.580 "memory_domains": [ 00:05:49.580 { 
00:05:49.580 "dma_device_id": "system", 00:05:49.580 "dma_device_type": 1 00:05:49.580 }, 00:05:49.580 { 00:05:49.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.580 "dma_device_type": 2 00:05:49.580 } 00:05:49.580 ], 00:05:49.580 "driver_specific": {} 00:05:49.580 } 00:05:49.580 ]' 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.580 [2024-12-10 05:31:07.490904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.580 [2024-12-10 05:31:07.490935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.580 [2024-12-10 05:31:07.490949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x124ed10 00:05:49.580 [2024-12-10 05:31:07.490955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.580 [2024-12-10 05:31:07.491927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.580 [2024-12-10 05:31:07.491949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.580 Passthru0 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.580 { 00:05:49.580 "name": "Malloc2", 00:05:49.580 "aliases": [ 00:05:49.580 "5f539435-5570-49b5-b3b9-0731b3c8d23c" 00:05:49.580 ], 00:05:49.580 "product_name": "Malloc disk", 00:05:49.580 "block_size": 512, 00:05:49.580 "num_blocks": 16384, 00:05:49.580 "uuid": "5f539435-5570-49b5-b3b9-0731b3c8d23c", 00:05:49.580 "assigned_rate_limits": { 00:05:49.580 "rw_ios_per_sec": 0, 00:05:49.580 "rw_mbytes_per_sec": 0, 00:05:49.580 "r_mbytes_per_sec": 0, 00:05:49.580 "w_mbytes_per_sec": 0 00:05:49.580 }, 00:05:49.580 "claimed": true, 00:05:49.580 "claim_type": "exclusive_write", 00:05:49.580 "zoned": false, 00:05:49.580 "supported_io_types": { 00:05:49.580 "read": true, 00:05:49.580 "write": true, 00:05:49.580 "unmap": true, 00:05:49.580 "flush": true, 00:05:49.580 "reset": true, 00:05:49.580 "nvme_admin": false, 00:05:49.580 "nvme_io": false, 00:05:49.580 "nvme_io_md": false, 00:05:49.580 "write_zeroes": true, 00:05:49.580 "zcopy": true, 00:05:49.580 "get_zone_info": false, 00:05:49.580 "zone_management": false, 00:05:49.580 "zone_append": false, 00:05:49.580 "compare": false, 00:05:49.580 "compare_and_write": false, 00:05:49.580 "abort": true, 00:05:49.580 "seek_hole": false, 00:05:49.580 "seek_data": false, 00:05:49.580 "copy": true, 00:05:49.580 "nvme_iov_md": false 00:05:49.580 }, 00:05:49.580 "memory_domains": [ 00:05:49.580 { 00:05:49.580 "dma_device_id": "system", 00:05:49.580 "dma_device_type": 1 00:05:49.580 }, 00:05:49.580 { 00:05:49.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.580 "dma_device_type": 2 00:05:49.580 } 00:05:49.580 ], 00:05:49.580 "driver_specific": {} 00:05:49.580 }, 00:05:49.580 { 00:05:49.580 "name": "Passthru0", 00:05:49.580 "aliases": [ 00:05:49.580 "7dc5c894-c16b-5943-ad6f-f84c96c14e6b" 00:05:49.580 ], 00:05:49.580 "product_name": "passthru", 00:05:49.580 "block_size": 512, 00:05:49.580 "num_blocks": 16384, 00:05:49.580 "uuid": 
"7dc5c894-c16b-5943-ad6f-f84c96c14e6b", 00:05:49.580 "assigned_rate_limits": { 00:05:49.580 "rw_ios_per_sec": 0, 00:05:49.580 "rw_mbytes_per_sec": 0, 00:05:49.580 "r_mbytes_per_sec": 0, 00:05:49.580 "w_mbytes_per_sec": 0 00:05:49.580 }, 00:05:49.580 "claimed": false, 00:05:49.580 "zoned": false, 00:05:49.580 "supported_io_types": { 00:05:49.580 "read": true, 00:05:49.580 "write": true, 00:05:49.580 "unmap": true, 00:05:49.580 "flush": true, 00:05:49.580 "reset": true, 00:05:49.580 "nvme_admin": false, 00:05:49.580 "nvme_io": false, 00:05:49.580 "nvme_io_md": false, 00:05:49.580 "write_zeroes": true, 00:05:49.580 "zcopy": true, 00:05:49.580 "get_zone_info": false, 00:05:49.580 "zone_management": false, 00:05:49.580 "zone_append": false, 00:05:49.580 "compare": false, 00:05:49.580 "compare_and_write": false, 00:05:49.580 "abort": true, 00:05:49.580 "seek_hole": false, 00:05:49.580 "seek_data": false, 00:05:49.580 "copy": true, 00:05:49.580 "nvme_iov_md": false 00:05:49.580 }, 00:05:49.580 "memory_domains": [ 00:05:49.580 { 00:05:49.580 "dma_device_id": "system", 00:05:49.580 "dma_device_type": 1 00:05:49.580 }, 00:05:49.580 { 00:05:49.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.580 "dma_device_type": 2 00:05:49.580 } 00:05:49.580 ], 00:05:49.580 "driver_specific": { 00:05:49.580 "passthru": { 00:05:49.580 "name": "Passthru0", 00:05:49.580 "base_bdev_name": "Malloc2" 00:05:49.580 } 00:05:49.580 } 00:05:49.580 } 00:05:49.580 ]' 00:05:49.580 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.839 00:05:49.839 real 0m0.270s 00:05:49.839 user 0m0.179s 00:05:49.839 sys 0m0.034s 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.839 05:31:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.839 ************************************ 00:05:49.839 END TEST rpc_daemon_integrity 00:05:49.839 ************************************ 00:05:49.839 05:31:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.839 05:31:07 rpc -- rpc/rpc.sh@84 -- # killprocess 4121584 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 4121584 ']' 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@958 -- # kill -0 4121584 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.839 05:31:07 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4121584 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4121584' 00:05:49.839 killing process with pid 4121584 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@973 -- # kill 4121584 00:05:49.839 05:31:07 rpc -- common/autotest_common.sh@978 -- # wait 4121584 00:05:50.098 00:05:50.098 real 0m2.596s 00:05:50.098 user 0m3.325s 00:05:50.098 sys 0m0.724s 00:05:50.098 05:31:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.098 05:31:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.098 ************************************ 00:05:50.098 END TEST rpc 00:05:50.098 ************************************ 00:05:50.098 05:31:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.098 05:31:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.098 05:31:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.098 05:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.357 ************************************ 00:05:50.357 START TEST skip_rpc 00:05:50.357 ************************************ 00:05:50.357 05:31:08 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.357 * Looking for test storage... 
00:05:50.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.357 05:31:08 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.357 05:31:08 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.357 05:31:08 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.357 05:31:08 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.357 05:31:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.357 05:31:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.357 05:31:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.358 05:31:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.358 --rc genhtml_branch_coverage=1 00:05:50.358 --rc genhtml_function_coverage=1 00:05:50.358 --rc genhtml_legend=1 00:05:50.358 --rc geninfo_all_blocks=1 00:05:50.358 --rc geninfo_unexecuted_blocks=1 00:05:50.358 00:05:50.358 ' 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.358 --rc genhtml_branch_coverage=1 00:05:50.358 --rc genhtml_function_coverage=1 00:05:50.358 --rc genhtml_legend=1 00:05:50.358 --rc geninfo_all_blocks=1 00:05:50.358 --rc geninfo_unexecuted_blocks=1 00:05:50.358 00:05:50.358 ' 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:50.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.358 --rc genhtml_branch_coverage=1 00:05:50.358 --rc genhtml_function_coverage=1 00:05:50.358 --rc genhtml_legend=1 00:05:50.358 --rc geninfo_all_blocks=1 00:05:50.358 --rc geninfo_unexecuted_blocks=1 00:05:50.358 00:05:50.358 ' 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.358 --rc genhtml_branch_coverage=1 00:05:50.358 --rc genhtml_function_coverage=1 00:05:50.358 --rc genhtml_legend=1 00:05:50.358 --rc geninfo_all_blocks=1 00:05:50.358 --rc geninfo_unexecuted_blocks=1 00:05:50.358 00:05:50.358 ' 00:05:50.358 05:31:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.358 05:31:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:50.358 05:31:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.358 05:31:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.358 ************************************ 00:05:50.358 START TEST skip_rpc 00:05:50.358 ************************************ 00:05:50.358 05:31:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:50.358 05:31:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4122223 00:05:50.358 05:31:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:50.358 05:31:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.358 05:31:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:50.617 [2024-12-10 05:31:08.347273] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:05:50.617 [2024-12-10 05:31:08.347316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122223 ] 00:05:50.617 [2024-12-10 05:31:08.427527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.617 [2024-12-10 05:31:08.465760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.884 05:31:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4122223 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 4122223 ']' 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 4122223 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4122223 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4122223' 00:05:55.884 killing process with pid 4122223 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 4122223 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 4122223 00:05:55.884 00:05:55.884 real 0m5.362s 00:05:55.884 user 0m5.114s 00:05:55.884 sys 0m0.287s 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.884 05:31:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.884 ************************************ 00:05:55.884 END TEST skip_rpc 00:05:55.884 ************************************ 00:05:55.884 05:31:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:55.884 05:31:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.884 05:31:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.884 05:31:13 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.884 ************************************ 00:05:55.884 START TEST skip_rpc_with_json 00:05:55.884 ************************************ 00:05:55.884 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:55.884 05:31:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:55.884 05:31:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4123158 00:05:55.884 05:31:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4123158 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4123158 ']' 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.885 05:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.885 [2024-12-10 05:31:13.778228] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:05:55.885 [2024-12-10 05:31:13.778269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123158 ] 00:05:56.143 [2024-12-10 05:31:13.857090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.143 [2024-12-10 05:31:13.897339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 [2024-12-10 05:31:14.116952] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:56.406 request: 00:05:56.406 { 00:05:56.406 "trtype": "tcp", 00:05:56.406 "method": "nvmf_get_transports", 00:05:56.406 "req_id": 1 00:05:56.406 } 00:05:56.406 Got JSON-RPC error response 00:05:56.406 response: 00:05:56.406 { 00:05:56.406 "code": -19, 00:05:56.406 "message": "No such device" 00:05:56.406 } 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 [2024-12-10 05:31:14.129056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.406 05:31:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.406 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.406 { 00:05:56.406 "subsystems": [ 00:05:56.406 { 00:05:56.406 "subsystem": "fsdev", 00:05:56.406 "config": [ 00:05:56.406 { 00:05:56.406 "method": "fsdev_set_opts", 00:05:56.406 "params": { 00:05:56.406 "fsdev_io_pool_size": 65535, 00:05:56.406 "fsdev_io_cache_size": 256 00:05:56.406 } 00:05:56.406 } 00:05:56.406 ] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "vfio_user_target", 00:05:56.406 "config": null 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "keyring", 00:05:56.406 "config": [] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "iobuf", 00:05:56.406 "config": [ 00:05:56.406 { 00:05:56.406 "method": "iobuf_set_options", 00:05:56.406 "params": { 00:05:56.406 "small_pool_count": 8192, 00:05:56.406 "large_pool_count": 1024, 00:05:56.406 "small_bufsize": 8192, 00:05:56.406 "large_bufsize": 135168, 00:05:56.406 "enable_numa": false 00:05:56.406 } 00:05:56.406 } 00:05:56.406 ] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "sock", 00:05:56.406 "config": [ 00:05:56.406 { 00:05:56.406 "method": "sock_set_default_impl", 00:05:56.406 "params": { 00:05:56.406 "impl_name": "posix" 00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "sock_impl_set_options", 00:05:56.406 "params": { 00:05:56.406 "impl_name": "ssl", 00:05:56.406 "recv_buf_size": 4096, 00:05:56.406 "send_buf_size": 4096, 
00:05:56.406 "enable_recv_pipe": true, 00:05:56.406 "enable_quickack": false, 00:05:56.406 "enable_placement_id": 0, 00:05:56.406 "enable_zerocopy_send_server": true, 00:05:56.406 "enable_zerocopy_send_client": false, 00:05:56.406 "zerocopy_threshold": 0, 00:05:56.406 "tls_version": 0, 00:05:56.406 "enable_ktls": false 00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "sock_impl_set_options", 00:05:56.406 "params": { 00:05:56.406 "impl_name": "posix", 00:05:56.406 "recv_buf_size": 2097152, 00:05:56.406 "send_buf_size": 2097152, 00:05:56.406 "enable_recv_pipe": true, 00:05:56.406 "enable_quickack": false, 00:05:56.406 "enable_placement_id": 0, 00:05:56.406 "enable_zerocopy_send_server": true, 00:05:56.406 "enable_zerocopy_send_client": false, 00:05:56.406 "zerocopy_threshold": 0, 00:05:56.406 "tls_version": 0, 00:05:56.406 "enable_ktls": false 00:05:56.406 } 00:05:56.406 } 00:05:56.406 ] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "vmd", 00:05:56.406 "config": [] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "accel", 00:05:56.406 "config": [ 00:05:56.406 { 00:05:56.406 "method": "accel_set_options", 00:05:56.406 "params": { 00:05:56.406 "small_cache_size": 128, 00:05:56.406 "large_cache_size": 16, 00:05:56.406 "task_count": 2048, 00:05:56.406 "sequence_count": 2048, 00:05:56.406 "buf_count": 2048 00:05:56.406 } 00:05:56.406 } 00:05:56.406 ] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "bdev", 00:05:56.406 "config": [ 00:05:56.406 { 00:05:56.406 "method": "bdev_set_options", 00:05:56.406 "params": { 00:05:56.406 "bdev_io_pool_size": 65535, 00:05:56.406 "bdev_io_cache_size": 256, 00:05:56.406 "bdev_auto_examine": true, 00:05:56.406 "iobuf_small_cache_size": 128, 00:05:56.406 "iobuf_large_cache_size": 16 00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "bdev_raid_set_options", 00:05:56.406 "params": { 00:05:56.406 "process_window_size_kb": 1024, 00:05:56.406 "process_max_bandwidth_mb_sec": 0 
00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "bdev_iscsi_set_options", 00:05:56.406 "params": { 00:05:56.406 "timeout_sec": 30 00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "bdev_nvme_set_options", 00:05:56.406 "params": { 00:05:56.406 "action_on_timeout": "none", 00:05:56.406 "timeout_us": 0, 00:05:56.406 "timeout_admin_us": 0, 00:05:56.406 "keep_alive_timeout_ms": 10000, 00:05:56.406 "arbitration_burst": 0, 00:05:56.406 "low_priority_weight": 0, 00:05:56.406 "medium_priority_weight": 0, 00:05:56.406 "high_priority_weight": 0, 00:05:56.406 "nvme_adminq_poll_period_us": 10000, 00:05:56.406 "nvme_ioq_poll_period_us": 0, 00:05:56.406 "io_queue_requests": 0, 00:05:56.406 "delay_cmd_submit": true, 00:05:56.406 "transport_retry_count": 4, 00:05:56.406 "bdev_retry_count": 3, 00:05:56.406 "transport_ack_timeout": 0, 00:05:56.406 "ctrlr_loss_timeout_sec": 0, 00:05:56.406 "reconnect_delay_sec": 0, 00:05:56.406 "fast_io_fail_timeout_sec": 0, 00:05:56.406 "disable_auto_failback": false, 00:05:56.406 "generate_uuids": false, 00:05:56.406 "transport_tos": 0, 00:05:56.406 "nvme_error_stat": false, 00:05:56.406 "rdma_srq_size": 0, 00:05:56.406 "io_path_stat": false, 00:05:56.406 "allow_accel_sequence": false, 00:05:56.406 "rdma_max_cq_size": 0, 00:05:56.406 "rdma_cm_event_timeout_ms": 0, 00:05:56.406 "dhchap_digests": [ 00:05:56.406 "sha256", 00:05:56.406 "sha384", 00:05:56.406 "sha512" 00:05:56.406 ], 00:05:56.406 "dhchap_dhgroups": [ 00:05:56.406 "null", 00:05:56.406 "ffdhe2048", 00:05:56.406 "ffdhe3072", 00:05:56.406 "ffdhe4096", 00:05:56.406 "ffdhe6144", 00:05:56.406 "ffdhe8192" 00:05:56.406 ], 00:05:56.406 "rdma_umr_per_io": false 00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "bdev_nvme_set_hotplug", 00:05:56.406 "params": { 00:05:56.406 "period_us": 100000, 00:05:56.406 "enable": false 00:05:56.406 } 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "method": "bdev_wait_for_examine" 00:05:56.406 } 00:05:56.406 
] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "scsi", 00:05:56.406 "config": null 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "scheduler", 00:05:56.406 "config": [ 00:05:56.406 { 00:05:56.406 "method": "framework_set_scheduler", 00:05:56.406 "params": { 00:05:56.406 "name": "static" 00:05:56.406 } 00:05:56.406 } 00:05:56.406 ] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "vhost_scsi", 00:05:56.406 "config": [] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "vhost_blk", 00:05:56.406 "config": [] 00:05:56.406 }, 00:05:56.406 { 00:05:56.406 "subsystem": "ublk", 00:05:56.406 "config": [] 00:05:56.407 }, 00:05:56.407 { 00:05:56.407 "subsystem": "nbd", 00:05:56.407 "config": [] 00:05:56.407 }, 00:05:56.407 { 00:05:56.407 "subsystem": "nvmf", 00:05:56.407 "config": [ 00:05:56.407 { 00:05:56.407 "method": "nvmf_set_config", 00:05:56.407 "params": { 00:05:56.407 "discovery_filter": "match_any", 00:05:56.407 "admin_cmd_passthru": { 00:05:56.407 "identify_ctrlr": false 00:05:56.407 }, 00:05:56.407 "dhchap_digests": [ 00:05:56.407 "sha256", 00:05:56.407 "sha384", 00:05:56.407 "sha512" 00:05:56.407 ], 00:05:56.407 "dhchap_dhgroups": [ 00:05:56.407 "null", 00:05:56.407 "ffdhe2048", 00:05:56.407 "ffdhe3072", 00:05:56.407 "ffdhe4096", 00:05:56.407 "ffdhe6144", 00:05:56.407 "ffdhe8192" 00:05:56.407 ] 00:05:56.407 } 00:05:56.407 }, 00:05:56.407 { 00:05:56.407 "method": "nvmf_set_max_subsystems", 00:05:56.407 "params": { 00:05:56.407 "max_subsystems": 1024 00:05:56.407 } 00:05:56.407 }, 00:05:56.407 { 00:05:56.407 "method": "nvmf_set_crdt", 00:05:56.407 "params": { 00:05:56.407 "crdt1": 0, 00:05:56.407 "crdt2": 0, 00:05:56.407 "crdt3": 0 00:05:56.407 } 00:05:56.407 }, 00:05:56.407 { 00:05:56.407 "method": "nvmf_create_transport", 00:05:56.407 "params": { 00:05:56.407 "trtype": "TCP", 00:05:56.407 "max_queue_depth": 128, 00:05:56.407 "max_io_qpairs_per_ctrlr": 127, 00:05:56.407 "in_capsule_data_size": 4096, 00:05:56.407 "max_io_size": 
131072, 00:05:56.407 "io_unit_size": 131072, 00:05:56.407 "max_aq_depth": 128, 00:05:56.407 "num_shared_buffers": 511, 00:05:56.407 "buf_cache_size": 4294967295, 00:05:56.407 "dif_insert_or_strip": false, 00:05:56.407 "zcopy": false, 00:05:56.407 "c2h_success": true, 00:05:56.407 "sock_priority": 0, 00:05:56.407 "abort_timeout_sec": 1, 00:05:56.407 "ack_timeout": 0, 00:05:56.407 "data_wr_pool_size": 0 00:05:56.407 } 00:05:56.407 } 00:05:56.407 ] 00:05:56.407 }, 00:05:56.407 { 00:05:56.407 "subsystem": "iscsi", 00:05:56.407 "config": [ 00:05:56.407 { 00:05:56.407 "method": "iscsi_set_options", 00:05:56.407 "params": { 00:05:56.407 "node_base": "iqn.2016-06.io.spdk", 00:05:56.407 "max_sessions": 128, 00:05:56.407 "max_connections_per_session": 2, 00:05:56.407 "max_queue_depth": 64, 00:05:56.407 "default_time2wait": 2, 00:05:56.407 "default_time2retain": 20, 00:05:56.407 "first_burst_length": 8192, 00:05:56.407 "immediate_data": true, 00:05:56.407 "allow_duplicated_isid": false, 00:05:56.407 "error_recovery_level": 0, 00:05:56.407 "nop_timeout": 60, 00:05:56.407 "nop_in_interval": 30, 00:05:56.407 "disable_chap": false, 00:05:56.407 "require_chap": false, 00:05:56.407 "mutual_chap": false, 00:05:56.407 "chap_group": 0, 00:05:56.407 "max_large_datain_per_connection": 64, 00:05:56.407 "max_r2t_per_connection": 4, 00:05:56.407 "pdu_pool_size": 36864, 00:05:56.407 "immediate_data_pool_size": 16384, 00:05:56.407 "data_out_pool_size": 2048 00:05:56.407 } 00:05:56.407 } 00:05:56.407 ] 00:05:56.407 } 00:05:56.407 ] 00:05:56.407 } 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4123158 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4123158 ']' 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4123158 00:05:56.407 05:31:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4123158 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4123158' 00:05:56.407 killing process with pid 4123158 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4123158 00:05:56.407 05:31:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4123158 00:05:56.977 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4123386 00:05:56.977 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.977 05:31:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4123386 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4123386 ']' 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4123386 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4123386 00:06:02.246 05:31:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4123386' 00:06:02.246 killing process with pid 4123386 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4123386 00:06:02.246 05:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4123386 00:06:02.246 05:31:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.246 05:31:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.246 00:06:02.246 real 0m6.286s 00:06:02.246 user 0m5.974s 00:06:02.246 sys 0m0.618s 00:06:02.246 05:31:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.246 05:31:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.246 ************************************ 00:06:02.247 END TEST skip_rpc_with_json 00:06:02.247 ************************************ 00:06:02.247 05:31:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:02.247 05:31:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.247 05:31:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.247 05:31:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 ************************************ 00:06:02.247 START TEST skip_rpc_with_delay 00:06:02.247 ************************************ 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay 
-- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.247 [2024-12-10 05:31:20.140829] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.247 00:06:02.247 real 0m0.069s 00:06:02.247 user 0m0.037s 00:06:02.247 sys 0m0.032s 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.247 05:31:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 ************************************ 00:06:02.247 END TEST skip_rpc_with_delay 00:06:02.247 ************************************ 00:06:02.247 05:31:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:02.247 05:31:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:02.247 05:31:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:02.247 05:31:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.247 05:31:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.247 05:31:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.506 ************************************ 00:06:02.506 START TEST exit_on_failed_rpc_init 00:06:02.506 ************************************ 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4124348 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4124348 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4124348 ']' 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.506 05:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.506 [2024-12-10 05:31:20.279226] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:02.506 [2024-12-10 05:31:20.279268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124348 ] 00:06:02.506 [2024-12-10 05:31:20.357389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.506 [2024-12-10 05:31:20.396085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.442 
05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.442 [2024-12-10 05:31:21.164996] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:03.442 [2024-12-10 05:31:21.165044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124570 ] 00:06:03.442 [2024-12-10 05:31:21.246120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.442 [2024-12-10 05:31:21.285281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.442 [2024-12-10 05:31:21.285338] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:03.442 [2024-12-10 05:31:21.285348] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.442 [2024-12-10 05:31:21.285354] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4124348 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4124348 ']' 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4124348 00:06:03.442 05:31:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4124348 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4124348' 00:06:03.442 killing process with pid 4124348 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4124348 00:06:03.442 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4124348 00:06:04.009 00:06:04.009 real 0m1.455s 00:06:04.009 user 0m1.640s 00:06:04.009 sys 0m0.438s 00:06:04.009 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.009 05:31:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.009 ************************************ 00:06:04.009 END TEST exit_on_failed_rpc_init 00:06:04.009 ************************************ 00:06:04.009 05:31:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.009 00:06:04.009 real 0m13.636s 00:06:04.009 user 0m12.993s 00:06:04.009 sys 0m1.645s 00:06:04.009 05:31:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.009 05:31:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.009 ************************************ 00:06:04.009 END TEST skip_rpc 00:06:04.009 ************************************ 00:06:04.009 05:31:21 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:04.009 05:31:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.009 05:31:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.009 05:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:04.009 ************************************ 00:06:04.009 START TEST rpc_client 00:06:04.009 ************************************ 00:06:04.010 05:31:21 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:04.010 * Looking for test storage... 00:06:04.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:04.010 05:31:21 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.010 05:31:21 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.010 05:31:21 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.010 05:31:21 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:04.010 05:31:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.269 05:31:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:04.269 05:31:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.269 05:31:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.269 05:31:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.269 05:31:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 
00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:04.269 OK 00:06:04.269 05:31:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:04.269 00:06:04.269 real 0m0.198s 00:06:04.269 user 0m0.106s 00:06:04.269 sys 0m0.106s 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.269 05:31:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:04.269 ************************************ 00:06:04.269 END TEST rpc_client 00:06:04.269 ************************************ 00:06:04.269 05:31:22 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.269 05:31:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.269 05:31:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.269 05:31:22 -- common/autotest_common.sh@10 
-- # set +x 00:06:04.269 ************************************ 00:06:04.269 START TEST json_config 00:06:04.269 ************************************ 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.269 05:31:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.269 05:31:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.269 05:31:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.269 05:31:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.269 05:31:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.269 05:31:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:04.269 05:31:22 json_config -- scripts/common.sh@345 -- # : 1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.269 05:31:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.269 05:31:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@353 -- # local d=1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.269 05:31:22 json_config -- scripts/common.sh@355 -- # echo 1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.269 05:31:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@353 -- # local d=2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.269 05:31:22 json_config -- scripts/common.sh@355 -- # echo 2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.269 05:31:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.269 05:31:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.269 05:31:22 json_config -- scripts/common.sh@368 -- # return 0 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:22 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:22 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.269 --rc genhtml_branch_coverage=1 00:06:04.269 --rc genhtml_function_coverage=1 00:06:04.269 --rc genhtml_legend=1 00:06:04.269 --rc geninfo_all_blocks=1 00:06:04.269 --rc geninfo_unexecuted_blocks=1 00:06:04.269 00:06:04.269 ' 00:06:04.269 05:31:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.269 05:31:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.528 05:31:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.528 05:31:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.528 05:31:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.528 05:31:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.528 05:31:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.528 05:31:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.528 05:31:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.528 05:31:22 json_config -- paths/export.sh@5 -- # export PATH 00:06:04.528 05:31:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@51 -- # : 0 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.528 05:31:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:04.528 05:31:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:04.529 INFO: JSON configuration test init 00:06:04.529 05:31:22 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.529 05:31:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:04.529 05:31:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:04.529 05:31:22 json_config -- json_config/common.sh@10 -- # shift 00:06:04.529 05:31:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.529 05:31:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.529 05:31:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.529 05:31:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.529 05:31:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.529 05:31:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4124780 00:06:04.529 05:31:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.529 Waiting for target to run... 
00:06:04.529 05:31:22 json_config -- json_config/common.sh@25 -- # waitforlisten 4124780 /var/tmp/spdk_tgt.sock 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 4124780 ']' 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.529 05:31:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.529 05:31:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.529 [2024-12-10 05:31:22.309287] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:04.529 [2024-12-10 05:31:22.309337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124780 ] 00:06:04.787 [2024-12-10 05:31:22.603640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.787 [2024-12-10 05:31:22.636094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.353 05:31:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.353 05:31:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:05.353 05:31:23 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.353 00:06:05.353 05:31:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:05.353 05:31:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:05.353 05:31:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.353 05:31:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.353 05:31:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:05.353 05:31:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:05.353 05:31:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.353 05:31:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.353 05:31:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:05.353 05:31:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:05.353 05:31:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.638 05:31:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.638 05:31:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:08.638 05:31:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:08.638 05:31:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@54 -- # sort 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:08.639 05:31:26 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:08.639 05:31:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.639 05:31:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:08.639 05:31:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.639 05:31:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:08.639 05:31:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.639 05:31:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.897 MallocForNvmf0 00:06:08.897 05:31:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:08.897 05:31:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.156 MallocForNvmf1 00:06:09.156 05:31:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.156 05:31:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.156 [2024-12-10 05:31:27.061264] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.156 05:31:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.156 05:31:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.414 05:31:27 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.414 05:31:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.672 05:31:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.672 05:31:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.672 05:31:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.672 05:31:27 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.930 [2024-12-10 05:31:27.787489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.930 05:31:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:09.930 05:31:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.931 05:31:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.931 05:31:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:09.931 05:31:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.931 05:31:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.931 05:31:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:09.931 05:31:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.931 05:31:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.189 MallocBdevForConfigChangeCheck 00:06:10.189 05:31:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:10.189 05:31:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.189 05:31:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.189 05:31:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:10.189 05:31:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.756 05:31:28 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:10.756 INFO: shutting down applications... 00:06:10.756 05:31:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:10.756 05:31:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:10.756 05:31:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:10.756 05:31:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:12.131 Calling clear_iscsi_subsystem 00:06:12.131 Calling clear_nvmf_subsystem 00:06:12.131 Calling clear_nbd_subsystem 00:06:12.131 Calling clear_ublk_subsystem 00:06:12.131 Calling clear_vhost_blk_subsystem 00:06:12.131 Calling clear_vhost_scsi_subsystem 00:06:12.131 Calling clear_bdev_subsystem 00:06:12.131 05:31:29 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:12.131 05:31:29 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:12.131 05:31:29 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:12.131 05:31:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.131 05:31:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:12.131 05:31:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:12.697 05:31:30 json_config -- json_config/json_config.sh@352 -- # break 00:06:12.697 05:31:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:12.697 05:31:30 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:12.697 05:31:30 json_config -- json_config/common.sh@31 -- # local app=target 00:06:12.697 05:31:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.697 05:31:30 json_config -- json_config/common.sh@35 -- # [[ -n 4124780 ]] 00:06:12.697 05:31:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4124780 00:06:12.697 05:31:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.697 05:31:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.697 05:31:30 json_config -- json_config/common.sh@41 -- # kill -0 4124780 00:06:12.697 05:31:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.956 05:31:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.956 05:31:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.956 05:31:30 json_config -- json_config/common.sh@41 -- # kill -0 4124780 00:06:12.956 05:31:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.956 05:31:30 json_config -- json_config/common.sh@43 -- # break 00:06:12.956 05:31:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.956 05:31:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.956 SPDK target shutdown done 00:06:12.956 05:31:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:12.956 INFO: relaunching applications... 
00:06:12.956 05:31:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.956 05:31:30 json_config -- json_config/common.sh@9 -- # local app=target 00:06:12.956 05:31:30 json_config -- json_config/common.sh@10 -- # shift 00:06:12.956 05:31:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.956 05:31:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.956 05:31:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.956 05:31:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.956 05:31:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.956 05:31:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4126426 00:06:12.956 05:31:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.956 Waiting for target to run... 00:06:12.956 05:31:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.956 05:31:30 json_config -- json_config/common.sh@25 -- # waitforlisten 4126426 /var/tmp/spdk_tgt.sock 00:06:12.956 05:31:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 4126426 ']' 00:06:12.956 05:31:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.956 05:31:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.956 05:31:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:12.956 05:31:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.956 05:31:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.214 [2024-12-10 05:31:30.930362] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:13.214 [2024-12-10 05:31:30.930421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126426 ] 00:06:13.473 [2024-12-10 05:31:31.391415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.732 [2024-12-10 05:31:31.447405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.120 [2024-12-10 05:31:34.479109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.120 [2024-12-10 05:31:34.511470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:17.384 05:31:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.385 05:31:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:17.385 05:31:35 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.385 00:06:17.385 05:31:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:17.385 05:31:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.385 INFO: Checking if target configuration is the same... 
00:06:17.385 05:31:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.385 05:31:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:17.385 05:31:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.385 + '[' 2 -ne 2 ']' 00:06:17.385 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.385 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:17.385 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.385 +++ basename /dev/fd/62 00:06:17.385 ++ mktemp /tmp/62.XXX 00:06:17.385 + tmp_file_1=/tmp/62.bPM 00:06:17.385 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.385 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.385 + tmp_file_2=/tmp/spdk_tgt_config.json.lgT 00:06:17.385 + ret=0 00:06:17.385 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.644 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.644 + diff -u /tmp/62.bPM /tmp/spdk_tgt_config.json.lgT 00:06:17.644 + echo 'INFO: JSON config files are the same' 00:06:17.644 INFO: JSON config files are the same 00:06:17.644 + rm /tmp/62.bPM /tmp/spdk_tgt_config.json.lgT 00:06:17.644 + exit 0 00:06:17.644 05:31:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:17.644 05:31:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:17.644 INFO: changing configuration and checking if this can be detected... 
00:06:17.644 05:31:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.644 05:31:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.902 05:31:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.902 05:31:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:17.902 05:31:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.902 + '[' 2 -ne 2 ']' 00:06:17.902 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.902 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:17.902 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.902 +++ basename /dev/fd/62 00:06:17.902 ++ mktemp /tmp/62.XXX 00:06:17.902 + tmp_file_1=/tmp/62.2q4 00:06:17.902 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.902 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.902 + tmp_file_2=/tmp/spdk_tgt_config.json.hSg 00:06:17.902 + ret=0 00:06:17.902 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.161 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.161 + diff -u /tmp/62.2q4 /tmp/spdk_tgt_config.json.hSg 00:06:18.161 + ret=1 00:06:18.161 + echo '=== Start of file: /tmp/62.2q4 ===' 00:06:18.161 + cat /tmp/62.2q4 00:06:18.161 + echo '=== End of file: /tmp/62.2q4 ===' 00:06:18.161 + echo '' 00:06:18.161 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hSg ===' 00:06:18.161 + cat /tmp/spdk_tgt_config.json.hSg 00:06:18.161 + echo '=== End of file: /tmp/spdk_tgt_config.json.hSg ===' 00:06:18.161 + echo '' 00:06:18.161 + rm /tmp/62.2q4 /tmp/spdk_tgt_config.json.hSg 00:06:18.161 + exit 1 00:06:18.161 05:31:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:18.161 INFO: configuration change detected. 
00:06:18.161 05:31:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 4126426 ]] 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.420 05:31:36 json_config -- json_config/json_config.sh@330 -- # killprocess 4126426 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@954 -- # '[' -z 4126426 ']' 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@958 -- # kill -0 
4126426 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@959 -- # uname 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4126426 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4126426' 00:06:18.420 killing process with pid 4126426 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@973 -- # kill 4126426 00:06:18.420 05:31:36 json_config -- common/autotest_common.sh@978 -- # wait 4126426 00:06:19.797 05:31:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.797 05:31:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:19.797 05:31:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.797 05:31:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.797 05:31:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:19.797 05:31:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:19.797 INFO: Success 00:06:19.797 00:06:19.797 real 0m15.662s 00:06:19.797 user 0m16.091s 00:06:19.797 sys 0m2.645s 00:06:19.797 05:31:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.797 05:31:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.797 ************************************ 00:06:19.797 END TEST json_config 00:06:19.797 ************************************ 00:06:20.057 05:31:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.057 05:31:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.057 05:31:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.057 05:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:20.057 ************************************ 00:06:20.057 START TEST json_config_extra_key 00:06:20.057 ************************************ 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.057 05:31:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.057 --rc genhtml_branch_coverage=1 00:06:20.057 --rc genhtml_function_coverage=1 00:06:20.057 --rc genhtml_legend=1 00:06:20.057 --rc geninfo_all_blocks=1 
00:06:20.057 --rc geninfo_unexecuted_blocks=1 00:06:20.057 00:06:20.057 ' 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.057 --rc genhtml_branch_coverage=1 00:06:20.057 --rc genhtml_function_coverage=1 00:06:20.057 --rc genhtml_legend=1 00:06:20.057 --rc geninfo_all_blocks=1 00:06:20.057 --rc geninfo_unexecuted_blocks=1 00:06:20.057 00:06:20.057 ' 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.057 --rc genhtml_branch_coverage=1 00:06:20.057 --rc genhtml_function_coverage=1 00:06:20.057 --rc genhtml_legend=1 00:06:20.057 --rc geninfo_all_blocks=1 00:06:20.057 --rc geninfo_unexecuted_blocks=1 00:06:20.057 00:06:20.057 ' 00:06:20.057 05:31:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.057 --rc genhtml_branch_coverage=1 00:06:20.057 --rc genhtml_function_coverage=1 00:06:20.057 --rc genhtml_legend=1 00:06:20.057 --rc geninfo_all_blocks=1 00:06:20.057 --rc geninfo_unexecuted_blocks=1 00:06:20.057 00:06:20.057 ' 00:06:20.057 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:20.057 05:31:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.058 05:31:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.058 05:31:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.058 05:31:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.058 05:31:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.058 05:31:37 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.058 05:31:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.058 05:31:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.058 05:31:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:20.058 05:31:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:20.058 05:31:37 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.058 05:31:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:20.058 INFO: launching applications... 00:06:20.058 05:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4127700 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.058 Waiting for target to run... 
00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4127700 /var/tmp/spdk_tgt.sock 00:06:20.058 05:31:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4127700 ']' 00:06:20.058 05:31:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:20.058 05:31:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.058 05:31:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.058 05:31:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.058 05:31:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.058 05:31:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:20.317 [2024-12-10 05:31:38.034444] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:20.317 [2024-12-10 05:31:38.034492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127700 ] 00:06:20.576 [2024-12-10 05:31:38.335730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.576 [2024-12-10 05:31:38.368766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.143 05:31:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.143 05:31:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:21.143 00:06:21.143 05:31:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:21.143 INFO: shutting down applications... 00:06:21.143 05:31:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4127700 ]] 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4127700 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4127700 00:06:21.143 05:31:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.402 05:31:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.402 05:31:39 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.402 05:31:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4127700 00:06:21.402 05:31:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.402 05:31:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:21.661 05:31:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.661 05:31:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.661 SPDK target shutdown done 00:06:21.661 05:31:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:21.661 Success 00:06:21.661 00:06:21.661 real 0m1.568s 00:06:21.661 user 0m1.314s 00:06:21.661 sys 0m0.414s 00:06:21.661 05:31:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.661 05:31:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.661 ************************************ 00:06:21.661 END TEST json_config_extra_key 00:06:21.661 ************************************ 00:06:21.661 05:31:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.661 05:31:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.661 05:31:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.661 05:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:21.661 ************************************ 00:06:21.661 START TEST alias_rpc 00:06:21.661 ************************************ 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.661 * Looking for test storage... 
00:06:21.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.661 05:31:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.661 05:31:39 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.661 --rc genhtml_branch_coverage=1 00:06:21.661 --rc genhtml_function_coverage=1 00:06:21.661 --rc genhtml_legend=1 00:06:21.661 --rc geninfo_all_blocks=1 00:06:21.661 --rc geninfo_unexecuted_blocks=1 00:06:21.661 00:06:21.661 ' 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.662 --rc genhtml_branch_coverage=1 00:06:21.662 --rc genhtml_function_coverage=1 00:06:21.662 --rc genhtml_legend=1 00:06:21.662 --rc geninfo_all_blocks=1 00:06:21.662 --rc geninfo_unexecuted_blocks=1 00:06:21.662 00:06:21.662 ' 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:06:21.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.662 --rc genhtml_branch_coverage=1 00:06:21.662 --rc genhtml_function_coverage=1 00:06:21.662 --rc genhtml_legend=1 00:06:21.662 --rc geninfo_all_blocks=1 00:06:21.662 --rc geninfo_unexecuted_blocks=1 00:06:21.662 00:06:21.662 ' 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.662 --rc genhtml_branch_coverage=1 00:06:21.662 --rc genhtml_function_coverage=1 00:06:21.662 --rc genhtml_legend=1 00:06:21.662 --rc geninfo_all_blocks=1 00:06:21.662 --rc geninfo_unexecuted_blocks=1 00:06:21.662 00:06:21.662 ' 00:06:21.662 05:31:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:21.662 05:31:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4127983 00:06:21.662 05:31:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4127983 00:06:21.662 05:31:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4127983 ']' 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.662 05:31:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.921 [2024-12-10 05:31:39.656350] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:21.921 [2024-12-10 05:31:39.656396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127983 ] 00:06:21.921 [2024-12-10 05:31:39.736339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.921 [2024-12-10 05:31:39.776606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.180 05:31:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.180 05:31:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.180 05:31:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:22.439 05:31:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4127983 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4127983 ']' 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4127983 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4127983 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4127983' 00:06:22.439 killing process with pid 4127983 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 4127983 00:06:22.439 05:31:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 4127983 00:06:22.698 00:06:22.698 real 0m1.140s 00:06:22.698 user 0m1.150s 00:06:22.698 sys 0m0.419s 00:06:22.698 05:31:40 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.698 05:31:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.698 ************************************ 00:06:22.698 END TEST alias_rpc 00:06:22.698 ************************************ 00:06:22.698 05:31:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:22.698 05:31:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:22.698 05:31:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.698 05:31:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.698 05:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.698 ************************************ 00:06:22.698 START TEST spdkcli_tcp 00:06:22.698 ************************************ 00:06:22.698 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:22.958 * Looking for test storage... 
00:06:22.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.958 05:31:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.958 --rc genhtml_branch_coverage=1 00:06:22.958 --rc genhtml_function_coverage=1 00:06:22.958 --rc genhtml_legend=1 00:06:22.958 --rc geninfo_all_blocks=1 00:06:22.958 --rc geninfo_unexecuted_blocks=1 00:06:22.958 00:06:22.958 ' 00:06:22.958 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.958 --rc genhtml_branch_coverage=1 00:06:22.958 --rc genhtml_function_coverage=1 00:06:22.958 --rc genhtml_legend=1 00:06:22.958 --rc geninfo_all_blocks=1 00:06:22.958 --rc geninfo_unexecuted_blocks=1 00:06:22.958 00:06:22.958 ' 00:06:22.958 05:31:40 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.958 --rc genhtml_branch_coverage=1 00:06:22.958 --rc genhtml_function_coverage=1 00:06:22.958 --rc genhtml_legend=1 00:06:22.958 --rc geninfo_all_blocks=1 00:06:22.958 --rc geninfo_unexecuted_blocks=1 00:06:22.959 00:06:22.959 ' 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.959 --rc genhtml_branch_coverage=1 00:06:22.959 --rc genhtml_function_coverage=1 00:06:22.959 --rc genhtml_legend=1 00:06:22.959 --rc geninfo_all_blocks=1 00:06:22.959 --rc geninfo_unexecuted_blocks=1 00:06:22.959 00:06:22.959 ' 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4128269 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4128269 00:06:22.959 05:31:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4128269 ']' 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.959 05:31:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.959 [2024-12-10 05:31:40.869328] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:22.959 [2024-12-10 05:31:40.869375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128269 ] 00:06:23.218 [2024-12-10 05:31:40.949986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.218 [2024-12-10 05:31:40.991836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.218 [2024-12-10 05:31:40.991838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.785 05:31:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.785 05:31:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:23.785 05:31:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4128497 00:06:23.785 05:31:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:23.786 05:31:41 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.045 [ 00:06:24.045 "bdev_malloc_delete", 00:06:24.045 "bdev_malloc_create", 00:06:24.045 "bdev_null_resize", 00:06:24.045 "bdev_null_delete", 00:06:24.045 "bdev_null_create", 00:06:24.045 "bdev_nvme_cuse_unregister", 00:06:24.045 "bdev_nvme_cuse_register", 00:06:24.045 "bdev_opal_new_user", 00:06:24.045 "bdev_opal_set_lock_state", 00:06:24.045 "bdev_opal_delete", 00:06:24.045 "bdev_opal_get_info", 00:06:24.045 "bdev_opal_create", 00:06:24.045 "bdev_nvme_opal_revert", 00:06:24.045 "bdev_nvme_opal_init", 00:06:24.045 "bdev_nvme_send_cmd", 00:06:24.045 "bdev_nvme_set_keys", 00:06:24.045 "bdev_nvme_get_path_iostat", 00:06:24.045 "bdev_nvme_get_mdns_discovery_info", 00:06:24.045 "bdev_nvme_stop_mdns_discovery", 00:06:24.045 "bdev_nvme_start_mdns_discovery", 00:06:24.045 "bdev_nvme_set_multipath_policy", 00:06:24.045 "bdev_nvme_set_preferred_path", 00:06:24.045 "bdev_nvme_get_io_paths", 00:06:24.045 "bdev_nvme_remove_error_injection", 00:06:24.045 "bdev_nvme_add_error_injection", 00:06:24.045 "bdev_nvme_get_discovery_info", 00:06:24.045 "bdev_nvme_stop_discovery", 00:06:24.045 "bdev_nvme_start_discovery", 00:06:24.045 "bdev_nvme_get_controller_health_info", 00:06:24.045 "bdev_nvme_disable_controller", 00:06:24.045 "bdev_nvme_enable_controller", 00:06:24.045 "bdev_nvme_reset_controller", 00:06:24.045 "bdev_nvme_get_transport_statistics", 00:06:24.045 "bdev_nvme_apply_firmware", 00:06:24.045 "bdev_nvme_detach_controller", 00:06:24.045 "bdev_nvme_get_controllers", 00:06:24.045 "bdev_nvme_attach_controller", 00:06:24.045 "bdev_nvme_set_hotplug", 00:06:24.045 "bdev_nvme_set_options", 00:06:24.045 "bdev_passthru_delete", 00:06:24.045 "bdev_passthru_create", 00:06:24.045 "bdev_lvol_set_parent_bdev", 00:06:24.045 "bdev_lvol_set_parent", 00:06:24.045 "bdev_lvol_check_shallow_copy", 00:06:24.045 "bdev_lvol_start_shallow_copy", 00:06:24.045 "bdev_lvol_grow_lvstore", 00:06:24.045 
"bdev_lvol_get_lvols", 00:06:24.045 "bdev_lvol_get_lvstores", 00:06:24.045 "bdev_lvol_delete", 00:06:24.045 "bdev_lvol_set_read_only", 00:06:24.045 "bdev_lvol_resize", 00:06:24.045 "bdev_lvol_decouple_parent", 00:06:24.045 "bdev_lvol_inflate", 00:06:24.046 "bdev_lvol_rename", 00:06:24.046 "bdev_lvol_clone_bdev", 00:06:24.046 "bdev_lvol_clone", 00:06:24.046 "bdev_lvol_snapshot", 00:06:24.046 "bdev_lvol_create", 00:06:24.046 "bdev_lvol_delete_lvstore", 00:06:24.046 "bdev_lvol_rename_lvstore", 00:06:24.046 "bdev_lvol_create_lvstore", 00:06:24.046 "bdev_raid_set_options", 00:06:24.046 "bdev_raid_remove_base_bdev", 00:06:24.046 "bdev_raid_add_base_bdev", 00:06:24.046 "bdev_raid_delete", 00:06:24.046 "bdev_raid_create", 00:06:24.046 "bdev_raid_get_bdevs", 00:06:24.046 "bdev_error_inject_error", 00:06:24.046 "bdev_error_delete", 00:06:24.046 "bdev_error_create", 00:06:24.046 "bdev_split_delete", 00:06:24.046 "bdev_split_create", 00:06:24.046 "bdev_delay_delete", 00:06:24.046 "bdev_delay_create", 00:06:24.046 "bdev_delay_update_latency", 00:06:24.046 "bdev_zone_block_delete", 00:06:24.046 "bdev_zone_block_create", 00:06:24.046 "blobfs_create", 00:06:24.046 "blobfs_detect", 00:06:24.046 "blobfs_set_cache_size", 00:06:24.046 "bdev_aio_delete", 00:06:24.046 "bdev_aio_rescan", 00:06:24.046 "bdev_aio_create", 00:06:24.046 "bdev_ftl_set_property", 00:06:24.046 "bdev_ftl_get_properties", 00:06:24.046 "bdev_ftl_get_stats", 00:06:24.046 "bdev_ftl_unmap", 00:06:24.046 "bdev_ftl_unload", 00:06:24.046 "bdev_ftl_delete", 00:06:24.046 "bdev_ftl_load", 00:06:24.046 "bdev_ftl_create", 00:06:24.046 "bdev_virtio_attach_controller", 00:06:24.046 "bdev_virtio_scsi_get_devices", 00:06:24.046 "bdev_virtio_detach_controller", 00:06:24.046 "bdev_virtio_blk_set_hotplug", 00:06:24.046 "bdev_iscsi_delete", 00:06:24.046 "bdev_iscsi_create", 00:06:24.046 "bdev_iscsi_set_options", 00:06:24.046 "accel_error_inject_error", 00:06:24.046 "ioat_scan_accel_module", 00:06:24.046 "dsa_scan_accel_module", 
00:06:24.046 "iaa_scan_accel_module", 00:06:24.046 "vfu_virtio_create_fs_endpoint", 00:06:24.046 "vfu_virtio_create_scsi_endpoint", 00:06:24.046 "vfu_virtio_scsi_remove_target", 00:06:24.046 "vfu_virtio_scsi_add_target", 00:06:24.046 "vfu_virtio_create_blk_endpoint", 00:06:24.046 "vfu_virtio_delete_endpoint", 00:06:24.046 "keyring_file_remove_key", 00:06:24.046 "keyring_file_add_key", 00:06:24.046 "keyring_linux_set_options", 00:06:24.046 "fsdev_aio_delete", 00:06:24.046 "fsdev_aio_create", 00:06:24.046 "iscsi_get_histogram", 00:06:24.046 "iscsi_enable_histogram", 00:06:24.046 "iscsi_set_options", 00:06:24.046 "iscsi_get_auth_groups", 00:06:24.046 "iscsi_auth_group_remove_secret", 00:06:24.046 "iscsi_auth_group_add_secret", 00:06:24.046 "iscsi_delete_auth_group", 00:06:24.046 "iscsi_create_auth_group", 00:06:24.046 "iscsi_set_discovery_auth", 00:06:24.046 "iscsi_get_options", 00:06:24.046 "iscsi_target_node_request_logout", 00:06:24.046 "iscsi_target_node_set_redirect", 00:06:24.046 "iscsi_target_node_set_auth", 00:06:24.046 "iscsi_target_node_add_lun", 00:06:24.046 "iscsi_get_stats", 00:06:24.046 "iscsi_get_connections", 00:06:24.046 "iscsi_portal_group_set_auth", 00:06:24.046 "iscsi_start_portal_group", 00:06:24.046 "iscsi_delete_portal_group", 00:06:24.046 "iscsi_create_portal_group", 00:06:24.046 "iscsi_get_portal_groups", 00:06:24.046 "iscsi_delete_target_node", 00:06:24.046 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.046 "iscsi_target_node_add_pg_ig_maps", 00:06:24.046 "iscsi_create_target_node", 00:06:24.046 "iscsi_get_target_nodes", 00:06:24.046 "iscsi_delete_initiator_group", 00:06:24.046 "iscsi_initiator_group_remove_initiators", 00:06:24.046 "iscsi_initiator_group_add_initiators", 00:06:24.046 "iscsi_create_initiator_group", 00:06:24.046 "iscsi_get_initiator_groups", 00:06:24.046 "nvmf_set_crdt", 00:06:24.046 "nvmf_set_config", 00:06:24.046 "nvmf_set_max_subsystems", 00:06:24.046 "nvmf_stop_mdns_prr", 00:06:24.046 "nvmf_publish_mdns_prr", 
00:06:24.046 "nvmf_subsystem_get_listeners", 00:06:24.046 "nvmf_subsystem_get_qpairs", 00:06:24.046 "nvmf_subsystem_get_controllers", 00:06:24.046 "nvmf_get_stats", 00:06:24.046 "nvmf_get_transports", 00:06:24.046 "nvmf_create_transport", 00:06:24.046 "nvmf_get_targets", 00:06:24.046 "nvmf_delete_target", 00:06:24.046 "nvmf_create_target", 00:06:24.046 "nvmf_subsystem_allow_any_host", 00:06:24.046 "nvmf_subsystem_set_keys", 00:06:24.046 "nvmf_subsystem_remove_host", 00:06:24.046 "nvmf_subsystem_add_host", 00:06:24.046 "nvmf_ns_remove_host", 00:06:24.046 "nvmf_ns_add_host", 00:06:24.046 "nvmf_subsystem_remove_ns", 00:06:24.046 "nvmf_subsystem_set_ns_ana_group", 00:06:24.046 "nvmf_subsystem_add_ns", 00:06:24.046 "nvmf_subsystem_listener_set_ana_state", 00:06:24.046 "nvmf_discovery_get_referrals", 00:06:24.046 "nvmf_discovery_remove_referral", 00:06:24.046 "nvmf_discovery_add_referral", 00:06:24.046 "nvmf_subsystem_remove_listener", 00:06:24.046 "nvmf_subsystem_add_listener", 00:06:24.046 "nvmf_delete_subsystem", 00:06:24.046 "nvmf_create_subsystem", 00:06:24.046 "nvmf_get_subsystems", 00:06:24.046 "env_dpdk_get_mem_stats", 00:06:24.046 "nbd_get_disks", 00:06:24.046 "nbd_stop_disk", 00:06:24.046 "nbd_start_disk", 00:06:24.046 "ublk_recover_disk", 00:06:24.046 "ublk_get_disks", 00:06:24.046 "ublk_stop_disk", 00:06:24.046 "ublk_start_disk", 00:06:24.046 "ublk_destroy_target", 00:06:24.046 "ublk_create_target", 00:06:24.046 "virtio_blk_create_transport", 00:06:24.046 "virtio_blk_get_transports", 00:06:24.046 "vhost_controller_set_coalescing", 00:06:24.046 "vhost_get_controllers", 00:06:24.046 "vhost_delete_controller", 00:06:24.046 "vhost_create_blk_controller", 00:06:24.046 "vhost_scsi_controller_remove_target", 00:06:24.046 "vhost_scsi_controller_add_target", 00:06:24.046 "vhost_start_scsi_controller", 00:06:24.046 "vhost_create_scsi_controller", 00:06:24.046 "thread_set_cpumask", 00:06:24.046 "scheduler_set_options", 00:06:24.046 "framework_get_governor", 00:06:24.046 
"framework_get_scheduler", 00:06:24.046 "framework_set_scheduler", 00:06:24.046 "framework_get_reactors", 00:06:24.046 "thread_get_io_channels", 00:06:24.046 "thread_get_pollers", 00:06:24.046 "thread_get_stats", 00:06:24.046 "framework_monitor_context_switch", 00:06:24.046 "spdk_kill_instance", 00:06:24.046 "log_enable_timestamps", 00:06:24.046 "log_get_flags", 00:06:24.046 "log_clear_flag", 00:06:24.046 "log_set_flag", 00:06:24.046 "log_get_level", 00:06:24.046 "log_set_level", 00:06:24.046 "log_get_print_level", 00:06:24.046 "log_set_print_level", 00:06:24.046 "framework_enable_cpumask_locks", 00:06:24.046 "framework_disable_cpumask_locks", 00:06:24.046 "framework_wait_init", 00:06:24.046 "framework_start_init", 00:06:24.046 "scsi_get_devices", 00:06:24.046 "bdev_get_histogram", 00:06:24.046 "bdev_enable_histogram", 00:06:24.046 "bdev_set_qos_limit", 00:06:24.046 "bdev_set_qd_sampling_period", 00:06:24.046 "bdev_get_bdevs", 00:06:24.046 "bdev_reset_iostat", 00:06:24.046 "bdev_get_iostat", 00:06:24.046 "bdev_examine", 00:06:24.046 "bdev_wait_for_examine", 00:06:24.046 "bdev_set_options", 00:06:24.046 "accel_get_stats", 00:06:24.046 "accel_set_options", 00:06:24.046 "accel_set_driver", 00:06:24.046 "accel_crypto_key_destroy", 00:06:24.046 "accel_crypto_keys_get", 00:06:24.046 "accel_crypto_key_create", 00:06:24.046 "accel_assign_opc", 00:06:24.046 "accel_get_module_info", 00:06:24.046 "accel_get_opc_assignments", 00:06:24.046 "vmd_rescan", 00:06:24.046 "vmd_remove_device", 00:06:24.046 "vmd_enable", 00:06:24.046 "sock_get_default_impl", 00:06:24.046 "sock_set_default_impl", 00:06:24.046 "sock_impl_set_options", 00:06:24.046 "sock_impl_get_options", 00:06:24.046 "iobuf_get_stats", 00:06:24.046 "iobuf_set_options", 00:06:24.046 "keyring_get_keys", 00:06:24.046 "vfu_tgt_set_base_path", 00:06:24.046 "framework_get_pci_devices", 00:06:24.046 "framework_get_config", 00:06:24.046 "framework_get_subsystems", 00:06:24.046 "fsdev_set_opts", 00:06:24.046 "fsdev_get_opts", 
00:06:24.046 "trace_get_info", 00:06:24.046 "trace_get_tpoint_group_mask", 00:06:24.046 "trace_disable_tpoint_group", 00:06:24.046 "trace_enable_tpoint_group", 00:06:24.046 "trace_clear_tpoint_mask", 00:06:24.046 "trace_set_tpoint_mask", 00:06:24.046 "notify_get_notifications", 00:06:24.046 "notify_get_types", 00:06:24.046 "spdk_get_version", 00:06:24.046 "rpc_get_methods" 00:06:24.046 ] 00:06:24.046 05:31:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.046 05:31:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.046 05:31:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4128269 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4128269 ']' 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4128269 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4128269 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4128269' 00:06:24.046 killing process with pid 4128269 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4128269 00:06:24.046 05:31:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4128269 00:06:24.306 00:06:24.306 real 0m1.610s 00:06:24.306 user 0m2.976s 00:06:24.306 sys 0m0.473s 00:06:24.306 05:31:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.306 05:31:42 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.306 ************************************ 00:06:24.306 END TEST spdkcli_tcp 00:06:24.306 ************************************ 00:06:24.565 05:31:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.565 05:31:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.565 05:31:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.565 05:31:42 -- common/autotest_common.sh@10 -- # set +x 00:06:24.565 ************************************ 00:06:24.565 START TEST dpdk_mem_utility 00:06:24.565 ************************************ 00:06:24.565 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.565 * Looking for test storage... 00:06:24.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:24.565 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.565 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.565 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.565 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:24.565 05:31:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:24.566 05:31:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.566 05:31:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:24.566 05:31:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.566 05:31:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.566 05:31:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.566 05:31:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:06:24.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.566 --rc genhtml_branch_coverage=1 00:06:24.566 --rc genhtml_function_coverage=1 00:06:24.566 --rc genhtml_legend=1 00:06:24.566 --rc geninfo_all_blocks=1 00:06:24.566 --rc geninfo_unexecuted_blocks=1 00:06:24.566 00:06:24.566 ' 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.566 --rc genhtml_branch_coverage=1 00:06:24.566 --rc genhtml_function_coverage=1 00:06:24.566 --rc genhtml_legend=1 00:06:24.566 --rc geninfo_all_blocks=1 00:06:24.566 --rc geninfo_unexecuted_blocks=1 00:06:24.566 00:06:24.566 ' 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.566 --rc genhtml_branch_coverage=1 00:06:24.566 --rc genhtml_function_coverage=1 00:06:24.566 --rc genhtml_legend=1 00:06:24.566 --rc geninfo_all_blocks=1 00:06:24.566 --rc geninfo_unexecuted_blocks=1 00:06:24.566 00:06:24.566 ' 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.566 --rc genhtml_branch_coverage=1 00:06:24.566 --rc genhtml_function_coverage=1 00:06:24.566 --rc genhtml_legend=1 00:06:24.566 --rc geninfo_all_blocks=1 00:06:24.566 --rc geninfo_unexecuted_blocks=1 00:06:24.566 00:06:24.566 ' 00:06:24.566 05:31:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:24.566 05:31:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4128580 00:06:24.566 05:31:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4128580 00:06:24.566 05:31:42 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4128580 ']' 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.566 05:31:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.825 [2024-12-10 05:31:42.547549] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:24.825 [2024-12-10 05:31:42.547597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128580 ] 00:06:24.825 [2024-12-10 05:31:42.627257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.825 [2024-12-10 05:31:42.665987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.762 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.762 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:25.762 05:31:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:25.762 05:31:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:25.762 05:31:43 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.762 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.762 { 00:06:25.762 "filename": "/tmp/spdk_mem_dump.txt" 00:06:25.762 } 00:06:25.762 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.762 05:31:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.762 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:25.762 1 heaps totaling size 818.000000 MiB 00:06:25.762 size: 818.000000 MiB heap id: 0 00:06:25.762 end heaps---------- 00:06:25.762 9 mempools totaling size 603.782043 MiB 00:06:25.762 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.762 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.762 size: 100.555481 MiB name: bdev_io_4128580 00:06:25.762 size: 50.003479 MiB name: msgpool_4128580 00:06:25.762 size: 36.509338 MiB name: fsdev_io_4128580 00:06:25.762 size: 21.763794 MiB name: PDU_Pool 00:06:25.762 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.762 size: 4.133484 MiB name: evtpool_4128580 00:06:25.762 size: 0.026123 MiB name: Session_Pool 00:06:25.762 end mempools------- 00:06:25.762 6 memzones totaling size 4.142822 MiB 00:06:25.762 size: 1.000366 MiB name: RG_ring_0_4128580 00:06:25.762 size: 1.000366 MiB name: RG_ring_1_4128580 00:06:25.762 size: 1.000366 MiB name: RG_ring_4_4128580 00:06:25.762 size: 1.000366 MiB name: RG_ring_5_4128580 00:06:25.762 size: 0.125366 MiB name: RG_ring_2_4128580 00:06:25.762 size: 0.015991 MiB name: RG_ring_3_4128580 00:06:25.762 end memzones------- 00:06:25.762 05:31:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.762 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:25.762 list of free elements. 
size: 10.852478 MiB 00:06:25.762 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:25.762 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:25.762 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:25.762 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:25.762 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:25.762 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:25.762 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:25.762 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:25.762 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:25.762 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:25.762 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:25.762 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:25.762 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:25.762 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:25.762 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:25.762 list of standard malloc elements. 
size: 199.218628 MiB 00:06:25.762 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:25.762 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:25.762 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:25.762 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:25.762 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:25.762 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.762 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:25.762 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.762 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:25.762 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:25.762 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:25.762 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:25.762 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:25.762 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:25.763 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:25.763 list of memzone associated elements. 
size: 607.928894 MiB 00:06:25.763 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:25.763 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.763 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:25.763 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.763 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:25.763 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_4128580_0 00:06:25.763 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:25.763 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4128580_0 00:06:25.763 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:25.763 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4128580_0 00:06:25.763 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:25.763 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.763 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:25.763 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.763 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:25.763 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4128580_0 00:06:25.763 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:25.763 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4128580 00:06:25.763 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:25.763 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4128580 00:06:25.763 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:25.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.763 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:25.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.763 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:25.763 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.763 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:25.763 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:25.763 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:25.763 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4128580 00:06:25.763 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:25.763 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4128580 00:06:25.763 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:25.763 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4128580 00:06:25.763 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:25.763 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4128580 00:06:25.763 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:25.763 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4128580 00:06:25.763 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:25.763 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4128580 00:06:25.763 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:25.763 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.763 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:25.763 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.763 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:25.763 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:25.763 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:25.763 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4128580 00:06:25.763 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:25.763 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4128580 00:06:25.763 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:25.763 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.763 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:25.763 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.763 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:25.763 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4128580 00:06:25.763 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:25.763 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.763 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:25.763 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4128580 00:06:25.763 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:25.763 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4128580 00:06:25.763 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:25.763 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4128580 00:06:25.763 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:25.763 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.763 05:31:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.763 05:31:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4128580 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4128580 ']' 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4128580 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4128580 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.763 05:31:43 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4128580' 00:06:25.763 killing process with pid 4128580 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4128580 00:06:25.763 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4128580 00:06:26.023 00:06:26.023 real 0m1.521s 00:06:26.023 user 0m1.596s 00:06:26.023 sys 0m0.448s 00:06:26.023 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.023 05:31:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 ************************************ 00:06:26.023 END TEST dpdk_mem_utility 00:06:26.023 ************************************ 00:06:26.023 05:31:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.023 05:31:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.023 05:31:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.023 05:31:43 -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 ************************************ 00:06:26.023 START TEST event 00:06:26.023 ************************************ 00:06:26.023 05:31:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.281 * Looking for test storage... 
00:06:26.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.281 05:31:43 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.281 05:31:44 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.281 05:31:44 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.281 05:31:44 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.281 05:31:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.281 05:31:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.281 05:31:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.281 05:31:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.281 05:31:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.281 05:31:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.282 05:31:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.282 05:31:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.282 05:31:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.282 05:31:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.282 05:31:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.282 05:31:44 event -- scripts/common.sh@344 -- # case "$op" in 00:06:26.282 05:31:44 event -- scripts/common.sh@345 -- # : 1 00:06:26.282 05:31:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.282 05:31:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.282 05:31:44 event -- scripts/common.sh@365 -- # decimal 1 00:06:26.282 05:31:44 event -- scripts/common.sh@353 -- # local d=1 00:06:26.282 05:31:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.282 05:31:44 event -- scripts/common.sh@355 -- # echo 1 00:06:26.282 05:31:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.282 05:31:44 event -- scripts/common.sh@366 -- # decimal 2 00:06:26.282 05:31:44 event -- scripts/common.sh@353 -- # local d=2 00:06:26.282 05:31:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.282 05:31:44 event -- scripts/common.sh@355 -- # echo 2 00:06:26.282 05:31:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.282 05:31:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.282 05:31:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.282 05:31:44 event -- scripts/common.sh@368 -- # return 0 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.282 --rc genhtml_branch_coverage=1 00:06:26.282 --rc genhtml_function_coverage=1 00:06:26.282 --rc genhtml_legend=1 00:06:26.282 --rc geninfo_all_blocks=1 00:06:26.282 --rc geninfo_unexecuted_blocks=1 00:06:26.282 00:06:26.282 ' 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.282 --rc genhtml_branch_coverage=1 00:06:26.282 --rc genhtml_function_coverage=1 00:06:26.282 --rc genhtml_legend=1 00:06:26.282 --rc geninfo_all_blocks=1 00:06:26.282 --rc geninfo_unexecuted_blocks=1 00:06:26.282 00:06:26.282 ' 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.282 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:26.282 --rc genhtml_branch_coverage=1 00:06:26.282 --rc genhtml_function_coverage=1 00:06:26.282 --rc genhtml_legend=1 00:06:26.282 --rc geninfo_all_blocks=1 00:06:26.282 --rc geninfo_unexecuted_blocks=1 00:06:26.282 00:06:26.282 ' 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.282 --rc genhtml_branch_coverage=1 00:06:26.282 --rc genhtml_function_coverage=1 00:06:26.282 --rc genhtml_legend=1 00:06:26.282 --rc geninfo_all_blocks=1 00:06:26.282 --rc geninfo_unexecuted_blocks=1 00:06:26.282 00:06:26.282 ' 00:06:26.282 05:31:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:26.282 05:31:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.282 05:31:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:26.282 05:31:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.282 05:31:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.282 ************************************ 00:06:26.282 START TEST event_perf 00:06:26.282 ************************************ 00:06:26.282 05:31:44 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.282 Running I/O for 1 seconds...[2024-12-10 05:31:44.143180] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:26.282 [2024-12-10 05:31:44.143254] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128920 ] 00:06:26.282 [2024-12-10 05:31:44.227843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.541 [2024-12-10 05:31:44.270483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.541 [2024-12-10 05:31:44.270592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.541 [2024-12-10 05:31:44.270741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.541 Running I/O for 1 seconds...[2024-12-10 05:31:44.270742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.477 00:06:27.477 lcore 0: 205117 00:06:27.477 lcore 1: 205117 00:06:27.477 lcore 2: 205116 00:06:27.477 lcore 3: 205115 00:06:27.477 done. 
00:06:27.477 00:06:27.477 real 0m1.188s 00:06:27.477 user 0m4.100s 00:06:27.477 sys 0m0.085s 00:06:27.477 05:31:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.477 05:31:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.477 ************************************ 00:06:27.477 END TEST event_perf 00:06:27.477 ************************************ 00:06:27.477 05:31:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.477 05:31:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:27.477 05:31:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.477 05:31:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.477 ************************************ 00:06:27.477 START TEST event_reactor 00:06:27.477 ************************************ 00:06:27.477 05:31:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.477 [2024-12-10 05:31:45.396818] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:27.477 [2024-12-10 05:31:45.396876] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129128 ] 00:06:27.735 [2024-12-10 05:31:45.480400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.735 [2024-12-10 05:31:45.519025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.672 test_start 00:06:28.672 oneshot 00:06:28.672 tick 100 00:06:28.672 tick 100 00:06:28.672 tick 250 00:06:28.672 tick 100 00:06:28.672 tick 100 00:06:28.672 tick 100 00:06:28.672 tick 250 00:06:28.672 tick 500 00:06:28.672 tick 100 00:06:28.672 tick 100 00:06:28.672 tick 250 00:06:28.672 tick 100 00:06:28.672 tick 100 00:06:28.672 test_end 00:06:28.672 00:06:28.672 real 0m1.176s 00:06:28.672 user 0m1.099s 00:06:28.672 sys 0m0.072s 00:06:28.672 05:31:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.672 05:31:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:28.672 ************************************ 00:06:28.672 END TEST event_reactor 00:06:28.672 ************************************ 00:06:28.672 05:31:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.672 05:31:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:28.672 05:31:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.672 05:31:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.672 ************************************ 00:06:28.672 START TEST event_reactor_perf 00:06:28.672 ************************************ 00:06:28.672 05:31:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:28.931 [2024-12-10 05:31:46.644770] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:28.931 [2024-12-10 05:31:46.644843] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129373 ] 00:06:28.931 [2024-12-10 05:31:46.724583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.931 [2024-12-10 05:31:46.762425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.868 test_start 00:06:29.868 test_end 00:06:29.868 Performance: 523925 events per second 00:06:29.868 00:06:29.868 real 0m1.173s 00:06:29.868 user 0m1.089s 00:06:29.868 sys 0m0.080s 00:06:29.868 05:31:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.868 05:31:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.868 ************************************ 00:06:29.868 END TEST event_reactor_perf 00:06:29.868 ************************************ 00:06:30.127 05:31:47 event -- event/event.sh@49 -- # uname -s 00:06:30.127 05:31:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.127 05:31:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.127 05:31:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.127 05:31:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.127 05:31:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.127 ************************************ 00:06:30.127 START TEST event_scheduler 00:06:30.127 ************************************ 00:06:30.127 05:31:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.127 * Looking for test storage... 00:06:30.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:30.127 05:31:47 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.127 05:31:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.127 05:31:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.127 05:31:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:30.127 05:31:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.128 05:31:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.128 --rc genhtml_branch_coverage=1 00:06:30.128 --rc genhtml_function_coverage=1 00:06:30.128 --rc genhtml_legend=1 00:06:30.128 --rc geninfo_all_blocks=1 00:06:30.128 --rc geninfo_unexecuted_blocks=1 00:06:30.128 00:06:30.128 ' 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.128 --rc genhtml_branch_coverage=1 00:06:30.128 --rc genhtml_function_coverage=1 00:06:30.128 --rc 
genhtml_legend=1 00:06:30.128 --rc geninfo_all_blocks=1 00:06:30.128 --rc geninfo_unexecuted_blocks=1 00:06:30.128 00:06:30.128 ' 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.128 --rc genhtml_branch_coverage=1 00:06:30.128 --rc genhtml_function_coverage=1 00:06:30.128 --rc genhtml_legend=1 00:06:30.128 --rc geninfo_all_blocks=1 00:06:30.128 --rc geninfo_unexecuted_blocks=1 00:06:30.128 00:06:30.128 ' 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.128 --rc genhtml_branch_coverage=1 00:06:30.128 --rc genhtml_function_coverage=1 00:06:30.128 --rc genhtml_legend=1 00:06:30.128 --rc geninfo_all_blocks=1 00:06:30.128 --rc geninfo_unexecuted_blocks=1 00:06:30.128 00:06:30.128 ' 00:06:30.128 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.128 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4129653 00:06:30.128 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.128 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.128 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4129653 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4129653 ']' 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.128 05:31:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.387 [2024-12-10 05:31:48.091773] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:30.387 [2024-12-10 05:31:48.091820] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129653 ] 00:06:30.387 [2024-12-10 05:31:48.170053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.387 [2024-12-10 05:31:48.214547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.387 [2024-12-10 05:31:48.214653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.387 [2024-12-10 05:31:48.214761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.387 [2024-12-10 05:31:48.214762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:30.387 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.387 [2024-12-10 05:31:48.263342] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:30.387 [2024-12-10 05:31:48.263361] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.387 [2024-12-10 05:31:48.263370] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.387 [2024-12-10 05:31:48.263375] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.387 [2024-12-10 05:31:48.263380] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.387 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.387 05:31:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 [2024-12-10 05:31:48.341923] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:30.646 05:31:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.646 05:31:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.646 05:31:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 ************************************ 00:06:30.646 START TEST scheduler_create_thread 00:06:30.646 ************************************ 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 2 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 3 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 4 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 5 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 6 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 7 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 8 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 9 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 10 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.646 05:31:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.646 05:31:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.022 05:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.022 05:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.022 05:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.022 05:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.022 05:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.398 05:31:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.398 00:06:33.398 real 0m2.620s 00:06:33.398 user 0m0.023s 00:06:33.398 sys 0m0.006s 00:06:33.398 05:31:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.398 05:31:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.398 ************************************ 00:06:33.398 END TEST scheduler_create_thread 00:06:33.398 ************************************ 00:06:33.398 05:31:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.398 05:31:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4129653 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4129653 ']' 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 4129653 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4129653 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4129653' 00:06:33.398 killing process with pid 4129653 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4129653 00:06:33.398 05:31:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4129653 00:06:33.657 [2024-12-10 05:31:51.480050] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:33.916 00:06:33.916 real 0m3.775s 00:06:33.916 user 0m5.662s 00:06:33.916 sys 0m0.374s 00:06:33.916 05:31:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.916 05:31:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.916 ************************************ 00:06:33.916 END TEST event_scheduler 00:06:33.916 ************************************ 00:06:33.916 05:31:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:33.916 05:31:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:33.916 05:31:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.916 05:31:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.916 05:31:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.916 ************************************ 00:06:33.916 START TEST app_repeat 00:06:33.916 ************************************ 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4130384 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4130384' 00:06:33.916 Process app_repeat pid: 4130384 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:33.916 spdk_app_start Round 0 00:06:33.916 05:31:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4130384 /var/tmp/spdk-nbd.sock 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4130384 ']' 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.916 05:31:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.916 [2024-12-10 05:31:51.762137] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:33.916 [2024-12-10 05:31:51.762188] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130384 ] 00:06:33.916 [2024-12-10 05:31:51.845313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.174 [2024-12-10 05:31:51.887779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.174 [2024-12-10 05:31:51.887781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.174 05:31:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.174 05:31:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.175 05:31:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.433 Malloc0 00:06:34.433 05:31:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.433 Malloc1 00:06:34.692 05:31:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.692 
05:31:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.692 05:31:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.693 05:31:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.693 /dev/nbd0 00:06:34.693 05:31:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.693 05:31:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:34.693 1+0 records in 00:06:34.693 1+0 records out 00:06:34.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023689 s, 17.3 MB/s 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.693 05:31:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.952 /dev/nbd1 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.952 05:31:52 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.952 1+0 records in 00:06:34.952 1+0 records out 00:06:34.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215715 s, 19.0 MB/s 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.952 05:31:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.952 05:31:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.211 { 00:06:35.211 "nbd_device": "/dev/nbd0", 00:06:35.211 "bdev_name": "Malloc0" 00:06:35.211 }, 00:06:35.211 { 00:06:35.211 "nbd_device": "/dev/nbd1", 00:06:35.211 "bdev_name": "Malloc1" 00:06:35.211 } 00:06:35.211 ]' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 
00:06:35.211 { 00:06:35.211 "nbd_device": "/dev/nbd0", 00:06:35.211 "bdev_name": "Malloc0" 00:06:35.211 }, 00:06:35.211 { 00:06:35.211 "nbd_device": "/dev/nbd1", 00:06:35.211 "bdev_name": "Malloc1" 00:06:35.211 } 00:06:35.211 ]' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.211 /dev/nbd1' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.211 /dev/nbd1' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.211 256+0 records in 00:06:35.211 256+0 records out 00:06:35.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00335012 s, 313 MB/s 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.211 256+0 records in 00:06:35.211 256+0 records out 00:06:35.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140944 s, 74.4 MB/s 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.211 05:31:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.211 256+0 records in 00:06:35.211 256+0 records out 00:06:35.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145761 s, 71.9 MB/s 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.470 05:31:53 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.470 05:31:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.729 05:31:53 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.729 05:31:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.988 05:31:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.988 05:31:53 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.247 05:31:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:36.506 [2024-12-10 05:31:54.238067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.506 [2024-12-10 05:31:54.273971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.506 [2024-12-10 05:31:54.273972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.506 [2024-12-10 05:31:54.314255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.506 [2024-12-10 05:31:54.314295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.794 05:31:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.794 05:31:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:39.794 spdk_app_start Round 1 00:06:39.794 05:31:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4130384 /var/tmp/spdk-nbd.sock 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4130384 ']' 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
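The `waitfornbd` calls traced above poll `/proc/partitions` up to 20 times until the nbd device appears. A minimal standalone sketch of that retry loop, with a temp file standing in for `/proc/partitions` so it runs anywhere (the real helper's delay between polls may differ from the 0.1 s assumed here):

```shell
# Sketch of the waitfornbd pattern from the trace: poll for a device name,
# give up after 20 attempts. A temp file stands in for /proc/partitions.
partitions=$(mktemp)
nbd_name=nbd0

# Simulate the device showing up a moment after polling starts.
( sleep 0.2; echo " 43 0 1048576 $nbd_name" >> "$partitions" ) &

found=no
for ((i = 1; i <= 20; i++)); do
    # -w matches the whole word, so nbd0 does not match nbd01
    if grep -q -w "$nbd_name" "$partitions"; then
        found=yes
        break
    fi
    sleep 0.1   # assumed poll interval; the real helper may use another value
done
wait
rm -f "$partitions"
echo "found=$found"
```

The `-w` flag matters: without it, waiting for `nbd0` would be satisfied early by an unrelated `nbd01` entry.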
00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.794 05:31:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:39.794 05:31:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.794 Malloc0 00:06:39.794 05:31:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.794 Malloc1 00:06:39.794 05:31:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.794 05:31:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.053 /dev/nbd0 00:06:40.053 05:31:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.053 05:31:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.053 1+0 records in 00:06:40.053 1+0 records out 00:06:40.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199438 s, 20.5 MB/s 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:40.053 05:31:57 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.053 05:31:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:40.053 05:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.053 05:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.053 05:31:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.312 /dev/nbd1 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.312 1+0 records in 00:06:40.312 1+0 records out 00:06:40.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197816 s, 20.7 MB/s 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.312 05:31:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.312 05:31:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.571 05:31:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.571 { 00:06:40.571 "nbd_device": "/dev/nbd0", 00:06:40.571 "bdev_name": "Malloc0" 00:06:40.571 }, 00:06:40.571 { 00:06:40.571 "nbd_device": "/dev/nbd1", 00:06:40.571 "bdev_name": "Malloc1" 00:06:40.571 } 00:06:40.571 ]' 00:06:40.571 05:31:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.571 { 00:06:40.571 "nbd_device": "/dev/nbd0", 00:06:40.571 "bdev_name": "Malloc0" 00:06:40.571 }, 00:06:40.571 { 00:06:40.571 "nbd_device": "/dev/nbd1", 00:06:40.571 "bdev_name": "Malloc1" 00:06:40.571 } 00:06:40.572 ]' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.572 /dev/nbd1' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.572 /dev/nbd1' 00:06:40.572 
05:31:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.572 256+0 records in 00:06:40.572 256+0 records out 00:06:40.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106296 s, 98.6 MB/s 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.572 256+0 records in 00:06:40.572 256+0 records out 00:06:40.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013905 s, 75.4 MB/s 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.572 256+0 records in 00:06:40.572 256+0 records out 00:06:40.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148655 s, 70.5 MB/s 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.572 05:31:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.831 05:31:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.089 05:31:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.089 05:31:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.089 05:31:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.089 05:31:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.089 05:31:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.089 05:31:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.089 05:31:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:41.090 05:31:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.090 05:31:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.090 05:31:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.090 05:31:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.348 05:31:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.348 05:31:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.607 05:31:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.866 [2024-12-10 05:31:59.597694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.866 [2024-12-10 05:31:59.633679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.866 [2024-12-10 05:31:59.633680] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.866 [2024-12-10 05:31:59.674614] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.866 [2024-12-10 05:31:59.674653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.154 05:32:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.154 05:32:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:45.154 spdk_app_start Round 2 00:06:45.154 05:32:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4130384 /var/tmp/spdk-nbd.sock 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4130384 ']' 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
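The `nbd_get_count` steps that recur in this trace parse the JSON from `nbd_get_disks` with jq and count `/dev/nbd` entries with `grep -c`. A sketch of that pipeline, using JSON copied from the trace in place of the live `rpc.py` call (jq must be installed, as it is on the CI host):

```shell
# Sketch of nbd_get_count: extract nbd_device fields, count /dev/nbd lines.
# In the real script the JSON comes from: rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
                  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c exits nonzero when nothing matches, which is why the trace shows
# a `true` fallback after the disks are stopped and the list is empty.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd) || count=0
echo "count=$count"
```

After `nbd_stop_disks` runs, the same pipeline sees `[]`, `grep -c` finds nothing, and the fallback yields the `count=0` visible in the log.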
00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.154 05:32:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:45.154 05:32:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.154 Malloc0 00:06:45.154 05:32:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.154 Malloc1 00:06:45.154 05:32:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.154 05:32:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.413 /dev/nbd0 00:06:45.413 05:32:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.413 05:32:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.413 1+0 records in 00:06:45.413 1+0 records out 00:06:45.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177681 s, 23.1 MB/s 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.413 05:32:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.413 05:32:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.413 05:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.413 05:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.413 05:32:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.672 /dev/nbd1 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.672 1+0 records in 00:06:45.672 1+0 records out 00:06:45.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217286 s, 18.9 MB/s 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.672 05:32:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.672 05:32:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.931 { 00:06:45.931 "nbd_device": "/dev/nbd0", 00:06:45.931 "bdev_name": "Malloc0" 00:06:45.931 }, 00:06:45.931 { 00:06:45.931 "nbd_device": "/dev/nbd1", 00:06:45.931 "bdev_name": "Malloc1" 00:06:45.931 } 00:06:45.931 ]' 00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.931 { 00:06:45.931 "nbd_device": "/dev/nbd0", 00:06:45.931 "bdev_name": "Malloc0" 00:06:45.931 }, 00:06:45.931 { 00:06:45.931 "nbd_device": "/dev/nbd1", 00:06:45.931 "bdev_name": "Malloc1" 00:06:45.931 } 00:06:45.931 ]' 00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.931 /dev/nbd1' 00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.931 /dev/nbd1' 00:06:45.931 
05:32:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:45.931 256+0 records in
00:06:45.931 256+0 records out
00:06:45.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107624 s, 97.4 MB/s
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:45.931 256+0 records in
00:06:45.931 256+0 records out
00:06:45.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145947 s, 71.8 MB/s
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:45.931 256+0 records in
00:06:45.931 256+0 records out
00:06:45.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152281 s, 68.9 MB/s
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:45.931 05:32:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:46.190 05:32:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:46.449 05:32:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:46.708 05:32:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:46.708 05:32:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:46.968 05:32:04 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:46.968 [2024-12-10 05:32:04.897120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:47.227 [2024-12-10 05:32:04.934135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:47.227 [2024-12-10 05:32:04.934135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.227 [2024-12-10 05:32:04.974569] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:47.227 [2024-12-10 05:32:04.974610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:50.515 05:32:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4130384 /var/tmp/spdk-nbd.sock
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4130384 ']'
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:50.515 05:32:07 event.app_repeat -- event/event.sh@39 -- # killprocess 4130384
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4130384 ']'
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4130384
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:50.515 05:32:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4130384
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4130384'
killing process with pid 4130384
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4130384
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4130384
00:06:50.515 spdk_app_start is called in Round 0.
00:06:50.515 Shutdown signal received, stop current app iteration
00:06:50.515 Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 reinitialization...
00:06:50.515 spdk_app_start is called in Round 1.
00:06:50.515 Shutdown signal received, stop current app iteration
00:06:50.515 Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 reinitialization...
00:06:50.515 spdk_app_start is called in Round 2.
00:06:50.515 Shutdown signal received, stop current app iteration
00:06:50.515 Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 reinitialization...
00:06:50.515 spdk_app_start is called in Round 3.
00:06:50.515 Shutdown signal received, stop current app iteration
00:06:50.515 05:32:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:50.515 05:32:08 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:50.515
00:06:50.515 real 0m16.414s
00:06:50.515 user 0m36.132s
00:06:50.515 sys 0m2.484s
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.515 05:32:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:50.515 ************************************
00:06:50.515 END TEST app_repeat
00:06:50.515 ************************************
00:06:50.515 05:32:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:50.515 05:32:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:50.515 05:32:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:50.515 05:32:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.515 05:32:08 event -- common/autotest_common.sh@10 -- # set +x
00:06:50.515 ************************************
00:06:50.515 START TEST cpu_locks
00:06:50.515 ************************************
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:50.515 * Looking for test storage...
00:06:50.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:50.515 05:32:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:50.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.515 --rc genhtml_branch_coverage=1
00:06:50.515 --rc genhtml_function_coverage=1
00:06:50.515 --rc genhtml_legend=1
00:06:50.515 --rc geninfo_all_blocks=1
00:06:50.515 --rc geninfo_unexecuted_blocks=1
00:06:50.515
00:06:50.515 '
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:50.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.515 --rc genhtml_branch_coverage=1
00:06:50.515 --rc genhtml_function_coverage=1
00:06:50.515 --rc genhtml_legend=1
00:06:50.515 --rc geninfo_all_blocks=1
00:06:50.515 --rc geninfo_unexecuted_blocks=1
00:06:50.515
00:06:50.515 '
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:50.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.515 --rc genhtml_branch_coverage=1
00:06:50.515 --rc genhtml_function_coverage=1
00:06:50.515 --rc genhtml_legend=1
00:06:50.515 --rc geninfo_all_blocks=1
00:06:50.515 --rc geninfo_unexecuted_blocks=1
00:06:50.515
00:06:50.515 '
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:50.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.515 --rc genhtml_branch_coverage=1
00:06:50.515 --rc genhtml_function_coverage=1
00:06:50.515 --rc genhtml_legend=1
00:06:50.515 --rc geninfo_all_blocks=1
00:06:50.515 --rc geninfo_unexecuted_blocks=1
00:06:50.515
00:06:50.515 '
00:06:50.515 05:32:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:50.515 05:32:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:50.515 05:32:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:50.515 05:32:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.515 05:32:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:50.515 ************************************
00:06:50.515 START TEST default_locks
00:06:50.515 ************************************
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4133344
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4133344
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4133344 ']'
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.515 05:32:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:50.774 [2024-12-10 05:32:08.476311] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:06:50.774 [2024-12-10 05:32:08.476354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133344 ]
00:06:50.774 [2024-12-10 05:32:08.556411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.774 [2024-12-10 05:32:08.596488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.710 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:51.710 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:51.710 05:32:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4133344
00:06:51.710 05:32:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4133344
00:06:51.710 05:32:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:51.969 lslocks: write error
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4133344
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4133344 ']'
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4133344
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133344
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133344'
killing process with pid 4133344
00:06:51.969 05:32:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4133344
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4133344
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4133344
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4133344
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4133344
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4133344 ']'
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4133344) - No such process
00:06:52.228 ERROR: process (pid: 4133344) is no longer running
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:52.228
00:06:52.228 real 0m1.710s
00:06:52.228 user 0m1.807s
00:06:52.228 sys 0m0.587s
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.228 05:32:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.228 ************************************
00:06:52.228 END TEST default_locks
00:06:52.228 ************************************
00:06:52.228 05:32:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:52.228 05:32:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.228 05:32:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.228 05:32:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.486 ************************************
00:06:52.486 START TEST default_locks_via_rpc
00:06:52.486 ************************************
00:06:52.486 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:52.486 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4133731
00:06:52.486 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4133731
00:06:52.486 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:52.486 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4133731 ']'
00:06:52.487 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.487 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:52.487 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.487 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.487 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.487 [2024-12-10 05:32:10.247713] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:06:52.487 [2024-12-10 05:32:10.247760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133731 ]
00:06:52.487 [2024-12-10 05:32:10.327189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.487 [2024-12-10 05:32:10.367441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4133731
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4133731
00:06:52.745 05:32:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4133731
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4133731 ']'
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4133731
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133731
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133731'
killing process with pid 4133731
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4133731
00:06:53.312 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4133731
00:06:53.570
00:06:53.570 real 0m1.223s
00:06:53.571 user 0m1.177s
00:06:53.571 sys 0m0.547s
00:06:53.571 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.571 05:32:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:53.571 ************************************
00:06:53.571 END TEST default_locks_via_rpc
00:06:53.571 ************************************
00:06:53.571 05:32:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:53.571 05:32:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:53.571 05:32:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.571 05:32:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:53.571 ************************************
00:06:53.571 START TEST non_locking_app_on_locked_coremask
00:06:53.571 ************************************
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4133938
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4133938 /var/tmp/spdk.sock
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4133938 ']'
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:53.571 05:32:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:53.829 [2024-12-10 05:32:11.541326] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:06:53.829 [2024-12-10 05:32:11.541369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133938 ]
00:06:53.829 [2024-12-10 05:32:11.621976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.829 [2024-12-10 05:32:11.662412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.765 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:54.765 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:54.765 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4134085
00:06:54.765 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4134085 /var/tmp/spdk2.sock
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4134085 ']'
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.766 05:32:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:54.766 [2024-12-10 05:32:12.412629] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:06:54.766 [2024-12-10 05:32:12.412676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134085 ]
00:06:54.766 [2024-12-10 05:32:12.506908] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:54.766 [2024-12-10 05:32:12.506935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.766 [2024-12-10 05:32:12.588263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.333 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:55.333 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:55.333 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4133938
00:06:55.333 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:55.333 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4133938
00:06:55.900 lslocks: write error
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4133938
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4133938 ']'
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4133938
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133938
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133938'
killing process with pid 4133938
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4133938
00:06:55.900 05:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4133938
00:06:56.468 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4134085
00:06:56.468 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4134085 ']'
00:06:56.468 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4134085
00:06:56.468 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:56.468 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:56.468 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134085
00:06:56.726 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:56.726 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:56.726 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134085'
killing process with pid 4134085
00:06:56.726 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4134085
00:06:56.726 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4134085
00:06:56.985
00:06:56.985 real 0m3.270s
00:06:56.986 user 0m3.551s
00:06:56.986 sys 0m0.986s
00:06:56.986 05:32:14
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.986 05:32:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 ************************************ 00:06:56.986 END TEST non_locking_app_on_locked_coremask 00:06:56.986 ************************************ 00:06:56.986 05:32:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.986 05:32:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.986 05:32:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.986 05:32:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 ************************************ 00:06:56.986 START TEST locking_app_on_unlocked_coremask 00:06:56.986 ************************************ 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4134573 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4134573 /var/tmp/spdk.sock 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4134573 ']' 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.986 05:32:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.986 05:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 [2024-12-10 05:32:14.882333] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:56.986 [2024-12-10 05:32:14.882380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134573 ] 00:06:57.245 [2024-12-10 05:32:14.959395] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.245 [2024-12-10 05:32:14.959420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.245 [2024-12-10 05:32:14.997596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4134584 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4134584 /var/tmp/spdk2.sock 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4134584 ']' 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.504 05:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.504 [2024-12-10 05:32:15.268649] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:06:57.504 [2024-12-10 05:32:15.268694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134584 ] 00:06:57.504 [2024-12-10 05:32:15.364992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.504 [2024-12-10 05:32:15.443698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.440 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.440 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.440 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4134584 00:06:58.440 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4134584 00:06:58.440 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.030 lslocks: write error 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4134573 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4134573 ']' 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4134573 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134573 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134573' 00:06:59.030 killing process with pid 4134573 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4134573 00:06:59.030 05:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4134573 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4134584 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4134584 ']' 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4134584 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134584 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134584' 00:06:59.657 killing process with pid 4134584 00:06:59.657 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4134584 00:06:59.657 05:32:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4134584 00:06:59.916 00:06:59.916 real 0m2.833s 00:06:59.916 user 0m2.962s 00:06:59.916 sys 0m0.976s 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.916 ************************************ 00:06:59.916 END TEST locking_app_on_unlocked_coremask 00:06:59.916 ************************************ 00:06:59.916 05:32:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.916 05:32:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.916 05:32:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.916 05:32:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.916 ************************************ 00:06:59.916 START TEST locking_app_on_locked_coremask 00:06:59.916 ************************************ 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4135070 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4135070 /var/tmp/spdk.sock 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4135070 ']' 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.916 05:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.916 [2024-12-10 05:32:17.785085] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:06:59.916 [2024-12-10 05:32:17.785130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135070 ] 00:06:59.916 [2024-12-10 05:32:17.863913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.175 [2024-12-10 05:32:17.900408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4135082 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4135082 /var/tmp/spdk2.sock 
00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4135082 /var/tmp/spdk2.sock 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4135082 /var/tmp/spdk2.sock 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4135082 ']' 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.175 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.434 [2024-12-10 05:32:18.172137] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:00.434 [2024-12-10 05:32:18.172184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135082 ] 00:07:00.434 [2024-12-10 05:32:18.273788] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4135070 has claimed it. 00:07:00.434 [2024-12-10 05:32:18.273825] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4135082) - No such process 00:07:01.002 ERROR: process (pid: 4135082) is no longer running 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4135070 00:07:01.002 05:32:18 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4135070 00:07:01.002 05:32:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.261 lslocks: write error 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4135070 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4135070 ']' 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4135070 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4135070 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4135070' 00:07:01.261 killing process with pid 4135070 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4135070 00:07:01.261 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4135070 00:07:01.520 00:07:01.520 real 0m1.733s 00:07:01.520 user 0m1.858s 00:07:01.520 sys 0m0.593s 00:07:01.520 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.520 05:32:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.520 ************************************ 00:07:01.520 END TEST locking_app_on_locked_coremask 00:07:01.520 ************************************ 00:07:01.779 05:32:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.779 05:32:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.779 05:32:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.779 05:32:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.779 ************************************ 00:07:01.779 START TEST locking_overlapped_coremask 00:07:01.779 ************************************ 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4135339 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4135339 /var/tmp/spdk.sock 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4135339 ']' 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.779 05:32:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.779 [2024-12-10 05:32:19.587057] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:01.779 [2024-12-10 05:32:19.587099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135339 ] 00:07:01.779 [2024-12-10 05:32:19.667890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.779 [2024-12-10 05:32:19.710825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.779 [2024-12-10 05:32:19.710932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.779 [2024-12-10 05:32:19.710932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4135566 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4135566 /var/tmp/spdk2.sock 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 4135566 /var/tmp/spdk2.sock 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4135566 /var/tmp/spdk2.sock 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4135566 ']' 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.715 05:32:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.715 [2024-12-10 05:32:20.481344] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:07:02.715 [2024-12-10 05:32:20.481395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135566 ] 00:07:02.716 [2024-12-10 05:32:20.581578] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4135339 has claimed it. 00:07:02.716 [2024-12-10 05:32:20.581612] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4135566) - No such process 00:07:03.283 ERROR: process (pid: 4135566) is no longer running 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4135339 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4135339 ']' 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4135339 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4135339 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4135339' 00:07:03.283 killing process with pid 4135339 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4135339 00:07:03.283 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4135339 00:07:03.543 00:07:03.543 real 0m1.939s 00:07:03.543 user 0m5.587s 00:07:03.543 sys 0m0.446s 00:07:03.543 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.543 05:32:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.543 
************************************ 00:07:03.543 END TEST locking_overlapped_coremask 00:07:03.543 ************************************ 00:07:03.802 05:32:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:03.802 05:32:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.802 05:32:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.802 05:32:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.802 ************************************ 00:07:03.802 START TEST locking_overlapped_coremask_via_rpc 00:07:03.802 ************************************ 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4135819 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4135819 /var/tmp/spdk.sock 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4135819 ']' 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:03.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.802 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.802 [2024-12-10 05:32:21.595702] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:03.802 [2024-12-10 05:32:21.595745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135819 ] 00:07:03.802 [2024-12-10 05:32:21.674167] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:03.802 [2024-12-10 05:32:21.674192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.802 [2024-12-10 05:32:21.712952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.802 [2024-12-10 05:32:21.713060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.802 [2024-12-10 05:32:21.713061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4135828 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4135828 /var/tmp/spdk2.sock 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4135828 ']' 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.061 05:32:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.061 [2024-12-10 05:32:21.978058] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:04.061 [2024-12-10 05:32:21.978108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135828 ] 00:07:04.320 [2024-12-10 05:32:22.074616] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:04.320 [2024-12-10 05:32:22.074645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.320 [2024-12-10 05:32:22.160365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.320 [2024-12-10 05:32:22.160480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.320 [2024-12-10 05:32:22.160482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.887 05:32:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.887 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.887 [2024-12-10 05:32:22.841285] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4135819 has claimed it. 00:07:05.146 request: 00:07:05.146 { 00:07:05.146 "method": "framework_enable_cpumask_locks", 00:07:05.146 "req_id": 1 00:07:05.146 } 00:07:05.146 Got JSON-RPC error response 00:07:05.146 response: 00:07:05.146 { 00:07:05.146 "code": -32603, 00:07:05.146 "message": "Failed to claim CPU core: 2" 00:07:05.146 } 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4135819 /var/tmp/spdk.sock 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 4135819 ']' 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.146 05:32:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4135828 /var/tmp/spdk2.sock 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4135828 ']' 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.146 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.405 00:07:05.405 real 0m1.706s 00:07:05.405 user 0m0.806s 00:07:05.405 sys 0m0.154s 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.405 05:32:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.405 ************************************ 00:07:05.405 END TEST locking_overlapped_coremask_via_rpc 00:07:05.405 ************************************ 00:07:05.405 05:32:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.405 05:32:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4135819 ]] 00:07:05.405 05:32:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 4135819 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4135819 ']' 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4135819 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4135819 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4135819' 00:07:05.405 killing process with pid 4135819 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4135819 00:07:05.405 05:32:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4135819 00:07:05.974 05:32:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4135828 ]] 00:07:05.974 05:32:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4135828 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4135828 ']' 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4135828 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4135828 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4135828' 00:07:05.974 killing process with pid 4135828 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4135828 00:07:05.974 05:32:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4135828 00:07:06.233 05:32:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.233 05:32:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:06.233 05:32:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4135819 ]] 00:07:06.233 05:32:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4135819 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4135819 ']' 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4135819 00:07:06.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4135819) - No such process 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4135819 is not found' 00:07:06.233 Process with pid 4135819 is not found 00:07:06.233 05:32:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4135828 ]] 00:07:06.233 05:32:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4135828 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4135828 ']' 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4135828 00:07:06.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4135828) - No such process 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4135828 is not found' 00:07:06.233 Process with pid 4135828 is not found 00:07:06.233 05:32:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.233 00:07:06.233 real 0m15.798s 00:07:06.233 user 0m27.440s 00:07:06.233 sys 0m5.283s 00:07:06.233 05:32:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.233 
05:32:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.233 ************************************ 00:07:06.233 END TEST cpu_locks 00:07:06.233 ************************************ 00:07:06.233 00:07:06.233 real 0m40.134s 00:07:06.233 user 1m15.802s 00:07:06.233 sys 0m8.750s 00:07:06.233 05:32:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.233 05:32:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.233 ************************************ 00:07:06.233 END TEST event 00:07:06.233 ************************************ 00:07:06.233 05:32:24 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:06.233 05:32:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.233 05:32:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.233 05:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:06.233 ************************************ 00:07:06.233 START TEST thread 00:07:06.233 ************************************ 00:07:06.233 05:32:24 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:06.492 * Looking for test storage... 
00:07:06.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.492 05:32:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.492 05:32:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.492 05:32:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.492 05:32:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.492 05:32:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.492 05:32:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.492 05:32:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.492 05:32:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.492 05:32:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.492 05:32:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.492 05:32:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.492 05:32:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:06.492 05:32:24 thread -- scripts/common.sh@345 -- # : 1 00:07:06.492 05:32:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.492 05:32:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.492 05:32:24 thread -- scripts/common.sh@365 -- # decimal 1 00:07:06.492 05:32:24 thread -- scripts/common.sh@353 -- # local d=1 00:07:06.492 05:32:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.492 05:32:24 thread -- scripts/common.sh@355 -- # echo 1 00:07:06.492 05:32:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.492 05:32:24 thread -- scripts/common.sh@366 -- # decimal 2 00:07:06.492 05:32:24 thread -- scripts/common.sh@353 -- # local d=2 00:07:06.492 05:32:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.492 05:32:24 thread -- scripts/common.sh@355 -- # echo 2 00:07:06.492 05:32:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.492 05:32:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.492 05:32:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.492 05:32:24 thread -- scripts/common.sh@368 -- # return 0 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.492 --rc genhtml_branch_coverage=1 00:07:06.492 --rc genhtml_function_coverage=1 00:07:06.492 --rc genhtml_legend=1 00:07:06.492 --rc geninfo_all_blocks=1 00:07:06.492 --rc geninfo_unexecuted_blocks=1 00:07:06.492 00:07:06.492 ' 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.492 --rc genhtml_branch_coverage=1 00:07:06.492 --rc genhtml_function_coverage=1 00:07:06.492 --rc genhtml_legend=1 00:07:06.492 --rc geninfo_all_blocks=1 00:07:06.492 --rc geninfo_unexecuted_blocks=1 00:07:06.492 00:07:06.492 ' 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.492 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.492 --rc genhtml_branch_coverage=1 00:07:06.492 --rc genhtml_function_coverage=1 00:07:06.492 --rc genhtml_legend=1 00:07:06.492 --rc geninfo_all_blocks=1 00:07:06.492 --rc geninfo_unexecuted_blocks=1 00:07:06.492 00:07:06.492 ' 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.492 --rc genhtml_branch_coverage=1 00:07:06.492 --rc genhtml_function_coverage=1 00:07:06.492 --rc genhtml_legend=1 00:07:06.492 --rc geninfo_all_blocks=1 00:07:06.492 --rc geninfo_unexecuted_blocks=1 00:07:06.492 00:07:06.492 ' 00:07:06.492 05:32:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.492 05:32:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.492 ************************************ 00:07:06.492 START TEST thread_poller_perf 00:07:06.492 ************************************ 00:07:06.492 05:32:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.492 [2024-12-10 05:32:24.339652] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:07:06.492 [2024-12-10 05:32:24.339720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136388 ] 00:07:06.492 [2024-12-10 05:32:24.420659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.750 [2024-12-10 05:32:24.460350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.750 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:07.684 [2024-12-10T04:32:25.644Z] ====================================== 00:07:07.685 [2024-12-10T04:32:25.644Z] busy:2108333276 (cyc) 00:07:07.685 [2024-12-10T04:32:25.644Z] total_run_count: 414000 00:07:07.685 [2024-12-10T04:32:25.644Z] tsc_hz: 2100000000 (cyc) 00:07:07.685 [2024-12-10T04:32:25.644Z] ====================================== 00:07:07.685 [2024-12-10T04:32:25.644Z] poller_cost: 5092 (cyc), 2424 (nsec) 00:07:07.685 00:07:07.685 real 0m1.187s 00:07:07.685 user 0m1.107s 00:07:07.685 sys 0m0.076s 00:07:07.685 05:32:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.685 05:32:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.685 ************************************ 00:07:07.685 END TEST thread_poller_perf 00:07:07.685 ************************************ 00:07:07.685 05:32:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.685 05:32:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:07.685 05:32:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.685 05:32:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.685 ************************************ 00:07:07.685 START TEST thread_poller_perf 00:07:07.685 
************************************ 00:07:07.685 05:32:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.685 [2024-12-10 05:32:25.594841] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:07.685 [2024-12-10 05:32:25.594910] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136637 ] 00:07:07.943 [2024-12-10 05:32:25.658639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.943 [2024-12-10 05:32:25.696844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.943 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:08.880 [2024-12-10T04:32:26.839Z] ====================================== 00:07:08.880 [2024-12-10T04:32:26.839Z] busy:2101608994 (cyc) 00:07:08.880 [2024-12-10T04:32:26.839Z] total_run_count: 5000000 00:07:08.880 [2024-12-10T04:32:26.839Z] tsc_hz: 2100000000 (cyc) 00:07:08.880 [2024-12-10T04:32:26.839Z] ====================================== 00:07:08.880 [2024-12-10T04:32:26.839Z] poller_cost: 420 (cyc), 200 (nsec) 00:07:08.880 00:07:08.880 real 0m1.163s 00:07:08.880 user 0m1.100s 00:07:08.880 sys 0m0.060s 00:07:08.880 05:32:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.880 05:32:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.880 ************************************ 00:07:08.880 END TEST thread_poller_perf 00:07:08.880 ************************************ 00:07:08.880 05:32:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:08.880 00:07:08.880 real 0m2.659s 00:07:08.880 user 0m2.366s 00:07:08.880 sys 0m0.309s 00:07:08.880 05:32:26 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.880 05:32:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.880 ************************************ 00:07:08.880 END TEST thread 00:07:08.880 ************************************ 00:07:08.880 05:32:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:08.880 05:32:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.880 05:32:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.880 05:32:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.880 05:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.139 ************************************ 00:07:09.139 START TEST app_cmdline 00:07:09.139 ************************************ 00:07:09.139 05:32:26 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.139 * Looking for test storage... 00:07:09.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:09.139 05:32:26 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.139 05:32:26 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.139 05:32:26 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.139 05:32:27 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:09.139 05:32:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.140 05:32:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.140 --rc genhtml_branch_coverage=1 
00:07:09.140 --rc genhtml_function_coverage=1 00:07:09.140 --rc genhtml_legend=1 00:07:09.140 --rc geninfo_all_blocks=1 00:07:09.140 --rc geninfo_unexecuted_blocks=1 00:07:09.140 00:07:09.140 ' 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.140 --rc genhtml_branch_coverage=1 00:07:09.140 --rc genhtml_function_coverage=1 00:07:09.140 --rc genhtml_legend=1 00:07:09.140 --rc geninfo_all_blocks=1 00:07:09.140 --rc geninfo_unexecuted_blocks=1 00:07:09.140 00:07:09.140 ' 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.140 --rc genhtml_branch_coverage=1 00:07:09.140 --rc genhtml_function_coverage=1 00:07:09.140 --rc genhtml_legend=1 00:07:09.140 --rc geninfo_all_blocks=1 00:07:09.140 --rc geninfo_unexecuted_blocks=1 00:07:09.140 00:07:09.140 ' 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.140 --rc genhtml_branch_coverage=1 00:07:09.140 --rc genhtml_function_coverage=1 00:07:09.140 --rc genhtml_legend=1 00:07:09.140 --rc geninfo_all_blocks=1 00:07:09.140 --rc geninfo_unexecuted_blocks=1 00:07:09.140 00:07:09.140 ' 00:07:09.140 05:32:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:09.140 05:32:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4136927 00:07:09.140 05:32:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4136927 00:07:09.140 05:32:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 4136927 ']' 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.140 05:32:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.140 [2024-12-10 05:32:27.073960] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:09.140 [2024-12-10 05:32:27.074006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136927 ] 00:07:09.399 [2024-12-10 05:32:27.153287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.399 [2024-12-10 05:32:27.193815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.658 05:32:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.658 05:32:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.658 { 00:07:09.658 "version": "SPDK v25.01-pre git sha1 4fb5f9881", 00:07:09.658 "fields": { 00:07:09.658 "major": 25, 00:07:09.658 "minor": 1, 00:07:09.658 "patch": 0, 00:07:09.658 "suffix": "-pre", 00:07:09.658 "commit": "4fb5f9881" 00:07:09.658 } 00:07:09.658 } 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.658 05:32:27 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.658 05:32:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:09.658 05:32:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.658 05:32:27 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.916 05:32:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.916 05:32:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.916 05:32:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.916 request: 00:07:09.916 { 00:07:09.916 "method": "env_dpdk_get_mem_stats", 00:07:09.916 "req_id": 1 00:07:09.916 } 00:07:09.916 Got JSON-RPC error response 00:07:09.916 response: 00:07:09.916 { 00:07:09.916 "code": -32601, 00:07:09.916 "message": "Method not found" 00:07:09.916 } 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.916 05:32:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4136927 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 4136927 ']' 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 4136927 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.916 05:32:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4136927 00:07:10.175 05:32:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.175 05:32:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.175 05:32:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4136927' 00:07:10.175 killing process with pid 4136927 00:07:10.175 
05:32:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 4136927 00:07:10.175 05:32:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 4136927 00:07:10.434 00:07:10.434 real 0m1.334s 00:07:10.434 user 0m1.570s 00:07:10.434 sys 0m0.428s 00:07:10.434 05:32:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.434 05:32:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.434 ************************************ 00:07:10.434 END TEST app_cmdline 00:07:10.434 ************************************ 00:07:10.434 05:32:28 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.434 05:32:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.434 05:32:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.434 05:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:10.434 ************************************ 00:07:10.434 START TEST version 00:07:10.434 ************************************ 00:07:10.434 05:32:28 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.434 * Looking for test storage... 
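The app_cmdline trace above starts spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods`, confirms exactly those two methods are exposed, and checks that anything else (env_dpdk_get_mem_stats) fails with JSON-RPC error -32601. A minimal sketch of the sorted-comparison step, using stub data in place of a live `rpc.py rpc_get_methods` call (a real run queries the spdk_tgt socket), would be:

```shell
#!/usr/bin/env bash
# Stub standing in for `rpc.py rpc_get_methods | jq -r '.[]'` on an
# allow-listed target; hypothetical data, not a live query.
rpc_get_methods_stub() {
    printf '%s\n' spdk_get_version rpc_get_methods
}

expected_methods=(rpc_get_methods spdk_get_version)   # kept sorted
methods=($(rpc_get_methods_stub | sort))

# Same shape as cmdline.sh@27-28: count first, then exact match.
(( ${#methods[@]} == ${#expected_methods[@]} )) &&
    [[ "${methods[*]}" == "${expected_methods[*]}" ]] &&
    echo "methods match"
```

This mirrors why the later env_dpdk_get_mem_stats call in the trace is expected to fail: the method is simply absent from the allow-listed set.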
00:07:10.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.434 05:32:28 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.434 05:32:28 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.434 05:32:28 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.694 05:32:28 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.694 05:32:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.694 05:32:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.694 05:32:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.694 05:32:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.694 05:32:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.694 05:32:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.694 05:32:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.694 05:32:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.694 05:32:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.694 05:32:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.694 05:32:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.694 05:32:28 version -- scripts/common.sh@344 -- # case "$op" in 00:07:10.694 05:32:28 version -- scripts/common.sh@345 -- # : 1 00:07:10.694 05:32:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.694 05:32:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.694 05:32:28 version -- scripts/common.sh@365 -- # decimal 1 00:07:10.694 05:32:28 version -- scripts/common.sh@353 -- # local d=1 00:07:10.694 05:32:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.694 05:32:28 version -- scripts/common.sh@355 -- # echo 1 00:07:10.694 05:32:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.694 05:32:28 version -- scripts/common.sh@366 -- # decimal 2 00:07:10.694 05:32:28 version -- scripts/common.sh@353 -- # local d=2 00:07:10.694 05:32:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.694 05:32:28 version -- scripts/common.sh@355 -- # echo 2 00:07:10.694 05:32:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.694 05:32:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.694 05:32:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.694 05:32:28 version -- scripts/common.sh@368 -- # return 0 00:07:10.694 05:32:28 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.694 05:32:28 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.694 --rc genhtml_branch_coverage=1 00:07:10.694 --rc genhtml_function_coverage=1 00:07:10.694 --rc genhtml_legend=1 00:07:10.694 --rc geninfo_all_blocks=1 00:07:10.694 --rc geninfo_unexecuted_blocks=1 00:07:10.694 00:07:10.694 ' 00:07:10.694 05:32:28 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.694 --rc genhtml_branch_coverage=1 00:07:10.694 --rc genhtml_function_coverage=1 00:07:10.694 --rc genhtml_legend=1 00:07:10.694 --rc geninfo_all_blocks=1 00:07:10.694 --rc geninfo_unexecuted_blocks=1 00:07:10.694 00:07:10.694 ' 00:07:10.694 05:32:28 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.694 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.694 --rc genhtml_branch_coverage=1 00:07:10.694 --rc genhtml_function_coverage=1 00:07:10.694 --rc genhtml_legend=1 00:07:10.694 --rc geninfo_all_blocks=1 00:07:10.694 --rc geninfo_unexecuted_blocks=1 00:07:10.694 00:07:10.694 ' 00:07:10.694 05:32:28 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.694 --rc genhtml_branch_coverage=1 00:07:10.694 --rc genhtml_function_coverage=1 00:07:10.694 --rc genhtml_legend=1 00:07:10.694 --rc geninfo_all_blocks=1 00:07:10.694 --rc geninfo_unexecuted_blocks=1 00:07:10.694 00:07:10.694 ' 00:07:10.694 05:32:28 version -- app/version.sh@17 -- # get_header_version major 00:07:10.694 05:32:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.694 05:32:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.694 05:32:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.694 05:32:28 version -- app/version.sh@17 -- # major=25 00:07:10.694 05:32:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.694 05:32:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.694 05:32:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.694 05:32:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.694 05:32:28 version -- app/version.sh@18 -- # minor=1 00:07:10.694 05:32:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.694 05:32:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.694 05:32:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.694 05:32:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.694 
05:32:28 version -- app/version.sh@19 -- # patch=0 00:07:10.695 05:32:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.695 05:32:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.695 05:32:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.695 05:32:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.695 05:32:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.695 05:32:28 version -- app/version.sh@22 -- # version=25.1 00:07:10.695 05:32:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.695 05:32:28 version -- app/version.sh@28 -- # version=25.1rc0 00:07:10.695 05:32:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.695 05:32:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.695 05:32:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:10.695 05:32:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:10.695 00:07:10.695 real 0m0.246s 00:07:10.695 user 0m0.147s 00:07:10.695 sys 0m0.142s 00:07:10.695 05:32:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.695 05:32:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.695 ************************************ 00:07:10.695 END TEST version 00:07:10.695 ************************************ 00:07:10.695 05:32:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:10.695 05:32:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:10.695 05:32:28 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:10.695 05:32:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.695 05:32:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.695 05:32:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:10.695 05:32:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.695 05:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:10.695 05:32:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:10.695 05:32:28 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:10.695 05:32:28 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.695 05:32:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.695 05:32:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.695 05:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:10.695 ************************************ 00:07:10.695 START TEST nvmf_tcp 00:07:10.695 ************************************ 00:07:10.695 05:32:28 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.954 * Looking for test storage... 
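The version.sh trace above derives major/minor/patch/suffix by grepping `include/spdk/version.h` and then asserts the assembled string matches `python3 -c 'import spdk; print(spdk.__version__)'`. A simplified stand-in for the header-extraction half (hypothetical defines written to a temp file, and `awk '{print $NF}'` in place of the `cut -f2` that relies on version.h's actual field layout) looks like:

```shell
#!/usr/bin/env bash
# Hypothetical copy of the relevant defines; the real file lives at
# include/spdk/version.h in the SPDK tree.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

get_header_version() {
    # version.sh uses grep | cut -f2 | tr -d '"'; awk's last-field
    # extraction is an equivalent that ignores the separator details.
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$hdr" |
        awk '{print $NF}' | tr -d '"'
}

major=$(get_header_version major)
minor=$(get_header_version minor)
suffix=$(get_header_version suffix)
echo "${major}.${minor}${suffix}"
rm -f "$hdr"
```

With these stub defines the assembled string is `25.1-pre`, matching the `version=25.1` / `suffix=-pre` values recorded in the trace.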
00:07:10.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.954 05:32:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.954 --rc genhtml_branch_coverage=1 00:07:10.954 --rc genhtml_function_coverage=1 00:07:10.954 --rc genhtml_legend=1 00:07:10.954 --rc geninfo_all_blocks=1 00:07:10.954 --rc geninfo_unexecuted_blocks=1 00:07:10.954 00:07:10.954 ' 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.954 --rc genhtml_branch_coverage=1 00:07:10.954 --rc genhtml_function_coverage=1 00:07:10.954 --rc genhtml_legend=1 00:07:10.954 --rc geninfo_all_blocks=1 00:07:10.954 --rc geninfo_unexecuted_blocks=1 00:07:10.954 00:07:10.954 ' 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.954 --rc genhtml_branch_coverage=1 00:07:10.954 --rc genhtml_function_coverage=1 00:07:10.954 --rc genhtml_legend=1 00:07:10.954 --rc geninfo_all_blocks=1 00:07:10.954 --rc geninfo_unexecuted_blocks=1 00:07:10.954 00:07:10.954 ' 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.954 --rc genhtml_branch_coverage=1 00:07:10.954 --rc genhtml_function_coverage=1 00:07:10.954 --rc genhtml_legend=1 00:07:10.954 --rc geninfo_all_blocks=1 00:07:10.954 --rc geninfo_unexecuted_blocks=1 00:07:10.954 00:07:10.954 ' 00:07:10.954 05:32:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.954 05:32:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:10.954 05:32:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.954 05:32:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.954 ************************************ 00:07:10.954 START TEST nvmf_target_core 00:07:10.954 ************************************ 00:07:10.954 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:10.954 * Looking for test storage... 
00:07:10.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.215 --rc genhtml_branch_coverage=1 00:07:11.215 --rc genhtml_function_coverage=1 00:07:11.215 --rc genhtml_legend=1 00:07:11.215 --rc geninfo_all_blocks=1 00:07:11.215 --rc geninfo_unexecuted_blocks=1 00:07:11.215 00:07:11.215 ' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.215 --rc genhtml_branch_coverage=1 
00:07:11.215 --rc genhtml_function_coverage=1 00:07:11.215 --rc genhtml_legend=1 00:07:11.215 --rc geninfo_all_blocks=1 00:07:11.215 --rc geninfo_unexecuted_blocks=1 00:07:11.215 00:07:11.215 ' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.215 --rc genhtml_branch_coverage=1 00:07:11.215 --rc genhtml_function_coverage=1 00:07:11.215 --rc genhtml_legend=1 00:07:11.215 --rc geninfo_all_blocks=1 00:07:11.215 --rc geninfo_unexecuted_blocks=1 00:07:11.215 00:07:11.215 ' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.215 --rc genhtml_branch_coverage=1 00:07:11.215 --rc genhtml_function_coverage=1 00:07:11.215 --rc genhtml_legend=1 00:07:11.215 --rc geninfo_all_blocks=1 00:07:11.215 --rc geninfo_unexecuted_blocks=1 00:07:11.215 00:07:11.215 ' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.215 05:32:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.215 05:32:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.215 ************************************ 00:07:11.215 START TEST nvmf_abort 00:07:11.215 ************************************ 00:07:11.216 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:11.216 * Looking for test storage... 
00:07:11.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.216 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.216 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.216 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.476 
05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.476 --rc genhtml_branch_coverage=1 00:07:11.476 --rc genhtml_function_coverage=1 00:07:11.476 --rc genhtml_legend=1 00:07:11.476 --rc geninfo_all_blocks=1 00:07:11.476 --rc 
geninfo_unexecuted_blocks=1 00:07:11.476 00:07:11.476 ' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.476 --rc genhtml_branch_coverage=1 00:07:11.476 --rc genhtml_function_coverage=1 00:07:11.476 --rc genhtml_legend=1 00:07:11.476 --rc geninfo_all_blocks=1 00:07:11.476 --rc geninfo_unexecuted_blocks=1 00:07:11.476 00:07:11.476 ' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.476 --rc genhtml_branch_coverage=1 00:07:11.476 --rc genhtml_function_coverage=1 00:07:11.476 --rc genhtml_legend=1 00:07:11.476 --rc geninfo_all_blocks=1 00:07:11.476 --rc geninfo_unexecuted_blocks=1 00:07:11.476 00:07:11.476 ' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.476 --rc genhtml_branch_coverage=1 00:07:11.476 --rc genhtml_function_coverage=1 00:07:11.476 --rc genhtml_legend=1 00:07:11.476 --rc geninfo_all_blocks=1 00:07:11.476 --rc geninfo_unexecuted_blocks=1 00:07:11.476 00:07:11.476 ' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.476 05:32:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.476 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.477 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:18.046 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.047 05:32:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:18.047 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:18.047 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.047 05:32:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:18.047 Found net devices under 0000:af:00.0: cvl_0_0 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:07:18.047 Found net devices under 0000:af:00.1: cvl_0_1 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.047 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:07:18.306 00:07:18.306 --- 10.0.0.2 ping statistics --- 00:07:18.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.306 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:18.306 00:07:18.306 --- 10.0.0.1 ping statistics --- 00:07:18.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.306 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:18.306 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=4140870 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4140870 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4140870 ']' 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.307 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.307 [2024-12-10 05:32:36.164753] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:07:18.307 [2024-12-10 05:32:36.164796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.307 [2024-12-10 05:32:36.237669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.566 [2024-12-10 05:32:36.278594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.566 [2024-12-10 05:32:36.278629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.566 [2024-12-10 05:32:36.278637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.566 [2024-12-10 05:32:36.278643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.566 [2024-12-10 05:32:36.278648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:18.566 [2024-12-10 05:32:36.279968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.566 [2024-12-10 05:32:36.280076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.566 [2024-12-10 05:32:36.280077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 [2024-12-10 05:32:36.428536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 Malloc0 00:07:18.566 05:32:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 Delay0 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.566 [2024-12-10 05:32:36.511292] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.566 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.825 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.825 05:32:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:18.825 [2024-12-10 05:32:36.643945] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:20.726 Initializing NVMe Controllers 00:07:20.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:20.726 controller IO queue size 128 less than required 00:07:20.726 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:20.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:20.726 Initialization complete. Launching workers. 
00:07:20.726 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37249 00:07:20.726 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37310, failed to submit 62 00:07:20.726 success 37253, unsuccessful 57, failed 0 00:07:20.726 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.727 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.727 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.985 rmmod nvme_tcp 00:07:20.985 rmmod nvme_fabrics 00:07:20.985 rmmod nvme_keyring 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:20.985 05:32:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4140870 ']' 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4140870 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4140870 ']' 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4140870 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4140870 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4140870' 00:07:20.985 killing process with pid 4140870 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4140870 00:07:20.985 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4140870 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 
-- # iptables-save 00:07:21.245 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.245 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.245 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.245 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.245 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.245 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.150 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:23.150 00:07:23.150 real 0m12.008s 00:07:23.150 user 0m11.776s 00:07:23.150 sys 0m6.031s 00:07:23.150 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.150 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.150 ************************************ 00:07:23.150 END TEST nvmf_abort 00:07:23.150 ************************************ 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.409 ************************************ 00:07:23.409 START TEST nvmf_ns_hotplug_stress 00:07:23.409 ************************************ 00:07:23.409 05:32:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:23.409 * Looking for test storage... 00:07:23.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.409 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.410 
05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.410 05:32:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.410 --rc genhtml_branch_coverage=1 00:07:23.410 --rc genhtml_function_coverage=1 00:07:23.410 --rc genhtml_legend=1 00:07:23.410 --rc geninfo_all_blocks=1 00:07:23.410 --rc geninfo_unexecuted_blocks=1 00:07:23.410 00:07:23.410 ' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.410 --rc genhtml_branch_coverage=1 00:07:23.410 --rc genhtml_function_coverage=1 00:07:23.410 --rc genhtml_legend=1 00:07:23.410 --rc geninfo_all_blocks=1 00:07:23.410 --rc geninfo_unexecuted_blocks=1 00:07:23.410 00:07:23.410 ' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.410 --rc genhtml_branch_coverage=1 00:07:23.410 --rc genhtml_function_coverage=1 00:07:23.410 --rc genhtml_legend=1 00:07:23.410 --rc geninfo_all_blocks=1 00:07:23.410 --rc geninfo_unexecuted_blocks=1 00:07:23.410 00:07:23.410 ' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.410 --rc genhtml_branch_coverage=1 00:07:23.410 --rc genhtml_function_coverage=1 00:07:23.410 --rc genhtml_legend=1 00:07:23.410 --rc geninfo_all_blocks=1 00:07:23.410 --rc geninfo_unexecuted_blocks=1 00:07:23.410 
00:07:23.410 ' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.410 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.411 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.411 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.670 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:23.670 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:23.670 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.670 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.239 05:32:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:30.239 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:30.239 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.239 05:32:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:30.239 Found net devices under 0000:af:00.0: cvl_0_0 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.239 05:32:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:30.239 Found net devices under 0000:af:00.1: cvl_0_1 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.239 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.239 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.239 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.239 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.239 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.239 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.239 05:32:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:07:30.240 00:07:30.240 --- 10.0.0.2 ping statistics --- 00:07:30.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.240 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:07:30.240 00:07:30.240 --- 10.0.0.1 ping statistics --- 00:07:30.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.240 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4145365 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4145365 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4145365 ']' 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.240 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:30.499 [2024-12-10 05:32:48.218823] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:07:30.499 [2024-12-10 05:32:48.218866] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.499 [2024-12-10 05:32:48.303841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.499 [2024-12-10 05:32:48.343620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.499 [2024-12-10 05:32:48.343658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.499 [2024-12-10 05:32:48.343665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.499 [2024-12-10 05:32:48.343671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.499 [2024-12-10 05:32:48.343676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.499 [2024-12-10 05:32:48.345007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.499 [2024-12-10 05:32:48.345099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.499 [2024-12-10 05:32:48.345101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.434 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.434 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:31.434 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.434 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.434 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.435 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.435 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:31.435 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.435 [2024-12-10 05:32:49.280472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.435 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:31.693 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.693 [2024-12-10 05:32:49.645776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.952 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.952 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:32.210 Malloc0 00:07:32.210 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.469 Delay0 00:07:32.469 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.728 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:32.986 NULL1 00:07:32.986 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:32.986 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:32.986 05:32:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4145850 00:07:32.986 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:32.986 05:32:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.362 Read completed with error (sct=0, sc=11) 00:07:34.362 05:32:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.362 05:32:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:34.362 05:32:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:34.621 true 00:07:34.621 05:32:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:34.621 05:32:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.557 05:32:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.557 05:32:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:35.557 05:32:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:35.815 true 00:07:35.815 05:32:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:35.815 05:32:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.073 05:32:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.332 05:32:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:36.332 05:32:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:36.332 true 00:07:36.332 05:32:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:36.332 05:32:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.709 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.709 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:37.709 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:37.967 true 00:07:37.967 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:37.967 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.903 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.903 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:38.903 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:39.162 true 00:07:39.162 
05:32:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:39.162 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.421 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.421 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:39.421 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:39.679 true 00:07:39.679 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:39.679 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.056 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.056 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:41.056 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:41.056 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:41.315 true 00:07:41.315 05:32:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:41.315 05:32:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.250 05:32:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.250 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:42.250 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:42.508 true 00:07:42.508 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:42.508 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.766 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.025 05:33:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:43.025 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:43.025 true 00:07:43.025 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:43.025 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.401 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.401 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:44.401 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:44.660 true 00:07:44.660 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:44.660 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.596 05:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.596 05:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:45.596 05:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:45.855 true 00:07:45.855 05:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:45.855 05:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.113 05:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.372 05:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:46.372 05:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:46.372 true 00:07:46.372 05:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:46.372 05:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.820 05:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.820 05:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:47.820 05:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:48.208 true 00:07:48.208 05:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:48.208 05:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.776 05:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.034 05:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:07:49.034 05:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:49.293 true 00:07:49.293 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:49.293 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.293 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.552 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:49.552 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:49.810 true 00:07:49.810 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850 00:07:49.810 05:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.745 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:07:51.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.004 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:51.005 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:51.263 true
00:07:51.263 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:51.263 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.199 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.199 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:52.199 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:52.457 true
00:07:52.457 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:52.457 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.715 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.715 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:52.715 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:52.973 true
00:07:52.973 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:52.973 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.348 05:33:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:54.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.348 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:54.348 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:54.606 true
00:07:54.606 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:54.606 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:55.431 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.431 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:07:55.431 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:07:55.690 true
00:07:55.690 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:55.690 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.948 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:56.206 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:07:56.206 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:07:56.206 true
00:07:56.206 05:33:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:56.206 05:33:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:57.582 05:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:57.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:57.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:57.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:57.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:57.582 05:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:07:57.582 05:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:07:57.840 true
00:07:57.840 05:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:57.840 05:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:58.775 05:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.775 05:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:07:58.775 05:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:07:59.033 true
00:07:59.033 05:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:59.033 05:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.291 05:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.550 05:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:07:59.550 05:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:07:59.550 true
00:07:59.550 05:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:07:59.550 05:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.926 05:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:00.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.926 05:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:00.926 05:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:01.184 true
00:08:01.184 05:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:08:01.184 05:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:02.118 05:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:02.118 05:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:02.118 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:02.377 true
00:08:02.377 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:08:02.377 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.635 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:02.893 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:02.893 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:02.893 true
00:08:02.893 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:08:02.893 05:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.269 Initializing NVMe Controllers
00:08:04.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:04.269 Controller IO queue size 128, less than required.
00:08:04.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:04.269 Controller IO queue size 128, less than required.
00:08:04.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:04.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:04.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:04.269 Initialization complete. Launching workers.
00:08:04.269 ========================================================
00:08:04.269                                                           Latency(us)
00:08:04.269 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:04.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2136.99       1.04   43569.07    2407.43 1012988.78
00:08:04.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18439.20       9.00    6941.70    2277.83  368826.62
00:08:04.269 ========================================================
00:08:04.269 Total                                  :   20576.19      10.05   10745.72    2277.83 1012988.78
00:08:04.269
00:08:04.269 05:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:04.269 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:04.269 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:04.528 true
00:08:04.528 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4145850
00:08:04.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4145850) - No such process
00:08:04.528 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4145850
00:08:04.528 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.786 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.786 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:04.786 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:04.786 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:04.786 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.786 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:05.045 null0
00:08:05.045 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:05.045 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:05.045 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:05.303 null1
00:08:05.303 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:05.303 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:05.303 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:05.561 null2
00:08:05.561 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:05.561 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:05.561 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:05.561 null3
00:08:05.561 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:05.561 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:05.561 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:05.819 null4
00:08:05.819 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:05.819 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:05.819 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:06.078 null5
00:08:06.078 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:06.078 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:06.078 05:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:06.336 null6
00:08:06.336 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:06.336 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:06.336 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:06.336 null7
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.595 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4151961 4151964 4151967 4151970 4151974 4151977 4151978 4151981
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:06.596 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.855 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:06.856 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.856 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.856 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:06.856 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.856 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.856 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:07.114 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:07.375 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.375 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.375 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:07.375 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.376 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.636 05:33:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.636 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.895 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.153 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.153 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.153 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.153 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.154 05:33:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.154 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.412 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.412 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.412 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.412 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.412 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.412 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.413 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.671 05:33:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.671 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.930 05:33:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.930 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.189 05:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:09.447 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.706 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.965 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.224 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.224 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.224 05:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.224 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.225 05:33:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.225 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.484 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.743 rmmod nvme_tcp 00:08:10.743 rmmod nvme_fabrics 00:08:10.743 rmmod nvme_keyring 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4145365 ']' 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4145365 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4145365 ']' 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4145365 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4145365 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4145365' 00:08:10.743 killing process with pid 4145365 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4145365 00:08:10.743 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4145365 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.002 05:33:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.002 05:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.907 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.907 00:08:12.907 real 0m49.678s 00:08:12.907 user 3m17.718s 00:08:12.907 sys 0m15.990s 00:08:12.907 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.907 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:12.907 ************************************ 00:08:12.907 END TEST nvmf_ns_hotplug_stress 00:08:12.907 ************************************ 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.166 ************************************ 00:08:13.166 START TEST nvmf_delete_subsystem 00:08:13.166 ************************************ 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:13.166 * Looking for test storage... 00:08:13.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.166 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:13.166 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:13.166 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.167 
05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:13.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.167 --rc genhtml_branch_coverage=1 00:08:13.167 --rc genhtml_function_coverage=1 00:08:13.167 --rc genhtml_legend=1 
00:08:13.167 --rc geninfo_all_blocks=1 00:08:13.167 --rc geninfo_unexecuted_blocks=1 00:08:13.167 00:08:13.167 ' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:13.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.167 --rc genhtml_branch_coverage=1 00:08:13.167 --rc genhtml_function_coverage=1 00:08:13.167 --rc genhtml_legend=1 00:08:13.167 --rc geninfo_all_blocks=1 00:08:13.167 --rc geninfo_unexecuted_blocks=1 00:08:13.167 00:08:13.167 ' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:13.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.167 --rc genhtml_branch_coverage=1 00:08:13.167 --rc genhtml_function_coverage=1 00:08:13.167 --rc genhtml_legend=1 00:08:13.167 --rc geninfo_all_blocks=1 00:08:13.167 --rc geninfo_unexecuted_blocks=1 00:08:13.167 00:08:13.167 ' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:13.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.167 --rc genhtml_branch_coverage=1 00:08:13.167 --rc genhtml_function_coverage=1 00:08:13.167 --rc genhtml_legend=1 00:08:13.167 --rc geninfo_all_blocks=1 00:08:13.167 --rc geninfo_unexecuted_blocks=1 00:08:13.167 00:08:13.167 ' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.167 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.168 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.427 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.427 05:33:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.427 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.427 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.996 05:33:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.996 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:19.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:19.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:19.997 Found net devices under 0000:af:00.0: cvl_0_0 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.997 05:33:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:19.997 Found net devices under 0000:af:00.1: cvl_0_1 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
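The nvmf_tcp_init sequence traced above (flush addresses, create a network namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420) can be read more easily as a standalone sketch. Interface names, namespace name, and addresses are taken from the log; the `run` echo-wrapper is my addition so the script can be inspected and executed without root privileges — drop it to apply the configuration for real.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from the trace above.
# run() only prints each command, so this is safe without root;
# remove the wrapper to actually configure the interfaces.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NETNS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
# allow NVMe/TCP traffic (port 4420) in from the initiator side
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as in the log
run ping -c 1 10.0.0.2
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

Putting the target interface in its own namespace is what lets a single machine act as both NVMe-oF target and initiator over real hardware, which is why the log's nvmf_tgt is launched via `ip netns exec cvl_0_0_ns_spdk`.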
00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:08:19.997 00:08:19.997 --- 10.0.0.2 ping statistics --- 00:08:19.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.997 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:19.997 00:08:19.997 --- 10.0.0.1 ping statistics --- 00:08:19.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.997 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.997 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4156764 00:08:20.256 05:33:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4156764 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4156764 ']' 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.256 05:33:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.256 [2024-12-10 05:33:38.000044] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:08:20.256 [2024-12-10 05:33:38.000091] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.256 [2024-12-10 05:33:38.085725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.256 [2024-12-10 05:33:38.125583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:20.256 [2024-12-10 05:33:38.125615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.256 [2024-12-10 05:33:38.125621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.256 [2024-12-10 05:33:38.125627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.256 [2024-12-10 05:33:38.125632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.256 [2024-12-10 05:33:38.126728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.256 [2024-12-10 05:33:38.126731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.514 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 [2024-12-10 05:33:38.275808] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 [2024-12-10 05:33:38.295994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 NULL1 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.515 05:33:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 Delay0 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4156998 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:20.515 05:33:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:20.515 [2024-12-10 05:33:38.407769] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
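The RPC calls traced in delete_subsystem.sh (create the TCP transport, a subsystem, a listener, a null bdev wrapped in a delay bdev, then attach it as a namespace) can be summarized as a sketch. All arguments are copied from the log; the `rpc` echo-wrapper stands in for SPDK's `scripts/rpc.py` client (my assumption for illustration), so the sketch documents the calls without requiring a live nvmf_tgt.

```shell
#!/usr/bin/env bash
# Sketch of the delete_subsystem test setup seen in the trace.
# rpc() echoes instead of invoking scripts/rpc.py; substitute the
# real client (against a running nvmf_tgt) to apply these calls.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512            # size/block-size as logged
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # latencies as logged
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf then drives randrw I/O at the listener while the test
# issues nvmf_delete_subsystem, producing the aborted-I/O lines below:
rpc nvmf_delete_subsystem "$NQN"
```

The delay bdev is the point of the test: by inflating Delay0's latency it guarantees I/O is still in flight when the subsystem is deleted, so the "completed with error" / "starting I/O failed: -6" lines that follow are the expected abort path rather than a failure.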
00:08:22.415 05:33:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.415 05:33:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.415 05:33:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error 
(sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 [2024-12-10 05:33:40.566613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd780 is same with the 
state(6) to be set 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Write completed with error (sct=0, sc=8) 00:08:22.673 starting I/O failed: -6 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.673 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with 
error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 starting I/O failed: -6 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 [2024-12-10 05:33:40.567011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe75000d390 is same with the state(6) to be set 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read 
completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error 
(sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Read completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 Write completed with error (sct=0, sc=8) 00:08:22.674 [2024-12-10 05:33:40.567457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7fe750000c80 is same with the state(6) to be set 00:08:23.608 [2024-12-10 05:33:41.542216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6de9b0 is same with the state(6) to be set 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 
Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 [2024-12-10 05:33:41.568699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd960 is same with the state(6) to be set 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed 
with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 [2024-12-10 05:33:41.568856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ddb40 is same with the state(6) to be set 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 [2024-12-10 05:33:41.569766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe75000d6c0 is same with the state(6) to be set 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with 
error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Read completed with error (sct=0, sc=8) 00:08:23.867 Write completed with error (sct=0, sc=8) 00:08:23.867 [2024-12-10 05:33:41.570617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2c0 is same with the state(6) to be set 00:08:23.867 Initializing NVMe Controllers 00:08:23.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:23.867 Controller IO queue size 128, less than required. 00:08:23.867 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:23.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:23.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:23.868 Initialization complete. Launching workers. 00:08:23.868 ======================================================== 00:08:23.868 Latency(us) 00:08:23.868 Device Information : IOPS MiB/s Average min max 00:08:23.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 192.85 0.09 944375.17 942.34 1009597.25 00:08:23.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.07 0.08 873812.51 254.22 1010855.99 00:08:23.868 ======================================================== 00:08:23.868 Total : 348.92 0.17 912812.96 254.22 1010855.99 00:08:23.868 00:08:23.868 [2024-12-10 05:33:41.571236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6de9b0 (9): Bad file descriptor 00:08:23.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:23.868 05:33:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.868 05:33:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:23.868 05:33:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4156998 00:08:23.868 05:33:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:24.126 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:24.126 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4156998 00:08:24.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4156998) - No such process 00:08:24.126 05:33:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4156998 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4156998 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4156998 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.384 [2024-12-10 05:33:42.102999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4157483 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:24.384 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.384 [2024-12-10 05:33:42.190047] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:24.951 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.951 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:24.951 05:33:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.209 05:33:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.209 05:33:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:25.209 05:33:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.774 05:33:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.774 05:33:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:25.774 05:33:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.340 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.340 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:26.340 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.904 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.904 05:33:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:26.904 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.470 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.470 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:27.470 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.470 Initializing NVMe Controllers 00:08:27.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.470 Controller IO queue size 128, less than required. 00:08:27.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:27.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:27.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:27.470 Initialization complete. Launching workers. 
00:08:27.470 ======================================================== 00:08:27.470 Latency(us) 00:08:27.470 Device Information : IOPS MiB/s Average min max 00:08:27.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002252.74 1000137.70 1041876.77 00:08:27.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003807.07 1000162.98 1042067.93 00:08:27.470 ======================================================== 00:08:27.470 Total : 256.00 0.12 1003029.91 1000137.70 1042067.93 00:08:27.470 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157483 00:08:27.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4157483) - No such process 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4157483 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.728 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:27.728 rmmod nvme_tcp 00:08:27.728 rmmod nvme_fabrics 00:08:27.987 rmmod nvme_keyring 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4156764 ']' 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4156764 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4156764 ']' 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4156764 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4156764 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4156764' 00:08:27.987 killing process with pid 4156764 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4156764 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
4156764 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.987 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.521 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.521 00:08:30.521 real 0m17.096s 00:08:30.521 user 0m29.654s 00:08:30.521 sys 0m6.098s 00:08:30.521 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.521 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.522 ************************************ 00:08:30.522 END TEST 
nvmf_delete_subsystem 00:08:30.522 ************************************ 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.522 ************************************ 00:08:30.522 START TEST nvmf_host_management 00:08:30.522 ************************************ 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.522 * Looking for test storage... 00:08:30.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.522 05:33:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.522 --rc genhtml_branch_coverage=1 00:08:30.522 --rc genhtml_function_coverage=1 00:08:30.522 --rc genhtml_legend=1 00:08:30.522 --rc 
geninfo_all_blocks=1 00:08:30.522 --rc geninfo_unexecuted_blocks=1 00:08:30.522 00:08:30.522 ' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.522 --rc genhtml_branch_coverage=1 00:08:30.522 --rc genhtml_function_coverage=1 00:08:30.522 --rc genhtml_legend=1 00:08:30.522 --rc geninfo_all_blocks=1 00:08:30.522 --rc geninfo_unexecuted_blocks=1 00:08:30.522 00:08:30.522 ' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.522 --rc genhtml_branch_coverage=1 00:08:30.522 --rc genhtml_function_coverage=1 00:08:30.522 --rc genhtml_legend=1 00:08:30.522 --rc geninfo_all_blocks=1 00:08:30.522 --rc geninfo_unexecuted_blocks=1 00:08:30.522 00:08:30.522 ' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.522 --rc genhtml_branch_coverage=1 00:08:30.522 --rc genhtml_function_coverage=1 00:08:30.522 --rc genhtml_legend=1 00:08:30.522 --rc geninfo_all_blocks=1 00:08:30.522 --rc geninfo_unexecuted_blocks=1 00:08:30.522 00:08:30.522 ' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.522 
05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.522 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.523 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.171 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:37.172 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:37.172 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.172 05:33:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:37.172 Found net devices under 0000:af:00.0: cvl_0_0 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:37.172 Found net devices under 0000:af:00.1: cvl_0_1 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.172 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:37.173 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.173 05:33:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:37.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:08:37.173 00:08:37.173 --- 10.0.0.2 ping statistics --- 00:08:37.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.173 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:37.173 00:08:37.173 --- 10.0.0.1 ping statistics --- 00:08:37.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.173 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.173 05:33:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4162163 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4162163 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4162163 ']' 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.173 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.508 [2024-12-10 05:33:55.171738] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:08:37.508 [2024-12-10 05:33:55.171790] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.508 [2024-12-10 05:33:55.258602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.508 [2024-12-10 05:33:55.298269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.508 [2024-12-10 05:33:55.298307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.508 [2024-12-10 05:33:55.298313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.508 [2024-12-10 05:33:55.298319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.508 [2024-12-10 05:33:55.298324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.508 [2024-12-10 05:33:55.299912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.508 [2024-12-10 05:33:55.300021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.508 [2024-12-10 05:33:55.300104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.508 [2024-12-10 05:33:55.300104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.076 05:33:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.076 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.076 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.076 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.076 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.334 [2024-12-10 05:33:56.043450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.334 05:33:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.334 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.335 Malloc0 00:08:38.335 [2024-12-10 05:33:56.114239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4162322 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4162322 /var/tmp/bdevperf.sock 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4162322 ']' 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.335 { 00:08:38.335 "params": { 00:08:38.335 "name": "Nvme$subsystem", 00:08:38.335 "trtype": "$TEST_TRANSPORT", 00:08:38.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.335 "adrfam": "ipv4", 00:08:38.335 "trsvcid": "$NVMF_PORT", 00:08:38.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.335 "hdgst": ${hdgst:-false}, 
00:08:38.335 "ddgst": ${ddgst:-false} 00:08:38.335 }, 00:08:38.335 "method": "bdev_nvme_attach_controller" 00:08:38.335 } 00:08:38.335 EOF 00:08:38.335 )") 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:38.335 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.335 "params": { 00:08:38.335 "name": "Nvme0", 00:08:38.335 "trtype": "tcp", 00:08:38.335 "traddr": "10.0.0.2", 00:08:38.335 "adrfam": "ipv4", 00:08:38.335 "trsvcid": "4420", 00:08:38.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.335 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:38.335 "hdgst": false, 00:08:38.335 "ddgst": false 00:08:38.335 }, 00:08:38.335 "method": "bdev_nvme_attach_controller" 00:08:38.335 }' 00:08:38.335 [2024-12-10 05:33:56.210264] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:08:38.335 [2024-12-10 05:33:56.210309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162322 ] 00:08:38.593 [2024-12-10 05:33:56.291883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.594 [2024-12-10 05:33:56.331978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.854 Running I/O for 10 seconds... 
00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.854 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.854 [2024-12-10 05:33:56.760938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d77810 is same with the state(6) to be set 00:08:38.854 [2024-12-10 05:33:56.761001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d77810 is same with the state(6) to be set 00:08:38.854 [2024-12-10 05:33:56.761009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d77810 is 
same with the state(6) to be set 00:08:38.855 [2024-12-10 05:33:56.761244]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d77810 is same with the state(6) to be set 00:08:38.855 [2024-12-10 05:33:56.761365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 
05:33:56.761643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.855 [2024-12-10 05:33:56.761767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.855 [2024-12-10 05:33:56.761774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 
[2024-12-10 05:33:56.761967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.761986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.761992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.856 [2024-12-10 05:33:56.762122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.856 [2024-12-10 05:33:56.762130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.856 [2024-12-10 05:33:56.762329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:38.856 [2024-12-10 05:33:56.762354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:08:38.856 [2024-12-10 05:33:56.763271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:38.856 task offset: 24576 on job bdev=Nvme0n1 fails
00:08:38.856
00:08:38.856 Latency(us)
00:08:38.856 [2024-12-10T04:33:56.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:38.856 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:38.856 Job: Nvme0n1 ended in about 0.11 seconds with error
00:08:38.857 Verification LBA range: start 0x0 length 0x400
00:08:38.857 Nvme0n1 : 0.11 1734.64 108.41 578.21 0.00 25556.54 2699.46 26713.72
00:08:38.857 [2024-12-10T04:33:56.816Z] ===================================================================================================================
00:08:38.857 [2024-12-10T04:33:56.816Z] Total : 1734.64 108.41 578.21 0.00 25556.54 2699.46 26713.72
00:08:38.857 05:33:56
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.857 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:38.857 [2024-12-10 05:33:56.765711] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:38.857 [2024-12-10 05:33:56.765732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2439b20 (9): Bad file descriptor
00:08:38.857 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.857 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:38.857 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.857 05:33:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:39.116 [2024-12-10 05:33:56.826566] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4162322
00:08:40.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4162322) - No such process
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:40.052 {
00:08:40.052 "params": {
00:08:40.052 "name": "Nvme$subsystem",
00:08:40.052 "trtype": "$TEST_TRANSPORT",
00:08:40.052 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:40.052 "adrfam": "ipv4",
00:08:40.052 "trsvcid": "$NVMF_PORT",
00:08:40.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:40.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:40.052 "hdgst": ${hdgst:-false},
00:08:40.052 "ddgst": ${ddgst:-false}
00:08:40.052 },
00:08:40.052 "method": "bdev_nvme_attach_controller"
00:08:40.052 }
00:08:40.052 EOF
00:08:40.052 )")
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:40.052 05:33:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:40.052 "params": {
00:08:40.052 "name": "Nvme0",
00:08:40.052 "trtype": "tcp",
00:08:40.052 "traddr": "10.0.0.2",
00:08:40.052 "adrfam": "ipv4",
00:08:40.052 "trsvcid": "4420",
00:08:40.052 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:40.052 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:40.052 "hdgst": false,
00:08:40.052 "ddgst": false
00:08:40.052 },
00:08:40.052 "method": "bdev_nvme_attach_controller"
00:08:40.052 }'
00:08:40.052 [2024-12-10 05:33:57.830021] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:08:40.052 [2024-12-10 05:33:57.830067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162678 ]
00:08:40.052 [2024-12-10 05:33:57.910447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.052 [2024-12-10 05:33:57.948076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.310 Running I/O for 1 seconds...
00:08:41.688 1984.00 IOPS, 124.00 MiB/s
00:08:41.688 Latency(us)
00:08:41.688 [2024-12-10T04:33:59.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.688 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:41.688 Verification LBA range: start 0x0 length 0x400
00:08:41.688 Nvme0n1 : 1.01 2035.07 127.19 0.00 0.00 30957.78 4743.56 26713.72
00:08:41.688 [2024-12-10T04:33:59.647Z] ===================================================================================================================
00:08:41.688 [2024-12-10T04:33:59.647Z] Total : 2035.07 127.19 0.00 0.00 30957.78 4743.56 26713.72
00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:41.688 05:33:59
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.688 rmmod nvme_tcp 00:08:41.688 rmmod nvme_fabrics 00:08:41.688 rmmod nvme_keyring 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4162163 ']' 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4162163 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4162163 ']' 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4162163 00:08:41.688 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:41.689 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.689 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4162163 00:08:41.689 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.689 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.689 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4162163' 00:08:41.689 killing process with pid 4162163 00:08:41.689 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4162163 00:08:41.689 05:33:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4162163 00:08:41.948 [2024-12-10 05:33:59.692678] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.948 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.853 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:43.853 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:44.113 00:08:44.113 real 0m13.740s 00:08:44.113 user 0m21.859s 
00:08:44.113 sys 0m6.177s 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.113 ************************************ 00:08:44.113 END TEST nvmf_host_management 00:08:44.113 ************************************ 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.113 ************************************ 00:08:44.113 START TEST nvmf_lvol 00:08:44.113 ************************************ 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:44.113 * Looking for test storage... 
00:08:44.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.113 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.113 05:34:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.113 --rc genhtml_branch_coverage=1 00:08:44.113 --rc genhtml_function_coverage=1 00:08:44.113 --rc genhtml_legend=1 00:08:44.113 --rc geninfo_all_blocks=1 00:08:44.113 --rc geninfo_unexecuted_blocks=1 
00:08:44.113 00:08:44.113 ' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.113 --rc genhtml_branch_coverage=1 00:08:44.113 --rc genhtml_function_coverage=1 00:08:44.113 --rc genhtml_legend=1 00:08:44.113 --rc geninfo_all_blocks=1 00:08:44.113 --rc geninfo_unexecuted_blocks=1 00:08:44.113 00:08:44.113 ' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.113 --rc genhtml_branch_coverage=1 00:08:44.113 --rc genhtml_function_coverage=1 00:08:44.113 --rc genhtml_legend=1 00:08:44.113 --rc geninfo_all_blocks=1 00:08:44.113 --rc geninfo_unexecuted_blocks=1 00:08:44.113 00:08:44.113 ' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.113 --rc genhtml_branch_coverage=1 00:08:44.113 --rc genhtml_function_coverage=1 00:08:44.113 --rc genhtml_legend=1 00:08:44.113 --rc geninfo_all_blocks=1 00:08:44.113 --rc geninfo_unexecuted_blocks=1 00:08:44.113 00:08:44.113 ' 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.113 05:34:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.113 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.373 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:50.945 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:50.945 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.945 
05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:50.945 Found net devices under 0000:af:00.0: cvl_0_0 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.945 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.946 05:34:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:50.946 Found net devices under 0000:af:00.1: cvl_0_1 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:08:50.946 00:08:50.946 --- 10.0.0.2 ping statistics --- 00:08:50.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.946 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:08:50.946 00:08:50.946 --- 10.0.0.1 ping statistics --- 00:08:50.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.946 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.946 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4166933 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4166933 00:08:51.205 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4166933 ']' 00:08:51.206 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.206 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.206 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.206 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.206 05:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.206 [2024-12-10 05:34:08.989039] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:08:51.206 [2024-12-10 05:34:08.989084] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.206 [2024-12-10 05:34:09.073306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.206 [2024-12-10 05:34:09.113216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.206 [2024-12-10 05:34:09.113255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.206 [2024-12-10 05:34:09.113262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.206 [2024-12-10 05:34:09.113268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.206 [2024-12-10 05:34:09.113273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
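The `nvmf_tgt -m 0x7` invocation above yields "Total cores available: 3" and three reactor threads because the core mask is a bitmap: bits 0-2 set selects cores 0, 1, and 2. A small sketch of the popcount that explains the reactor count (illustrative only, not SPDK's parsing code):

```shell
#!/usr/bin/env bash
# Sketch: count the set bits in the SPDK core mask to get the reactor count.
mask=0x7   # the -m value passed to nvmf_tgt above
count=0
for (( bits = mask; bits > 0; bits >>= 1 )); do
  (( count += bits & 1 ))
done
echo "$count"   # prints "3" - one reactor per set bit (cores 0, 1, 2)
```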
00:08:51.206 [2024-12-10 05:34:09.114620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.206 [2024-12-10 05:34:09.114730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.206 [2024-12-10 05:34:09.114732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.465 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.465 [2024-12-10 05:34:09.411834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.724 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.724 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:51.724 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.983 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:51.983 05:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:52.241 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:52.500 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=31eb8fab-8333-49f2-b8cf-56d95c71f020 00:08:52.500 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31eb8fab-8333-49f2-b8cf-56d95c71f020 lvol 20 00:08:52.759 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cd25b938-2146-4137-ba72-940573404af6 00:08:52.759 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.759 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cd25b938-2146-4137-ba72-940573404af6 00:08:53.017 05:34:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:53.276 [2024-12-10 05:34:11.083233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.276 05:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.535 05:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4167225 00:08:53.535 05:34:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:53.535 05:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:54.474 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cd25b938-2146-4137-ba72-940573404af6 MY_SNAPSHOT 00:08:54.733 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=56e16a54-dfe0-4d56-b8e8-a8574a3120d1 00:08:54.733 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cd25b938-2146-4137-ba72-940573404af6 30 00:08:54.993 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 56e16a54-dfe0-4d56-b8e8-a8574a3120d1 MY_CLONE 00:08:55.251 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=93f3843b-1eef-4675-8069-7a49aefd2d69 00:08:55.251 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 93f3843b-1eef-4675-8069-7a49aefd2d69 00:08:55.819 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4167225 00:09:03.937 Initializing NVMe Controllers 00:09:03.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:03.937 Controller IO queue size 128, less than required. 00:09:03.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:03.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:03.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:03.938 Initialization complete. Launching workers. 00:09:03.938 ======================================================== 00:09:03.938 Latency(us) 00:09:03.938 Device Information : IOPS MiB/s Average min max 00:09:03.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12028.10 46.98 10647.08 1583.29 62816.89 00:09:03.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11916.00 46.55 10744.19 3626.57 52416.04 00:09:03.938 ======================================================== 00:09:03.938 Total : 23944.10 93.53 10695.41 1583.29 62816.89 00:09:03.938 00:09:03.938 05:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.196 05:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cd25b938-2146-4137-ba72-940573404af6 00:09:04.196 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31eb8fab-8333-49f2-b8cf-56d95c71f020 00:09:04.453 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.454 rmmod nvme_tcp 00:09:04.454 rmmod nvme_fabrics 00:09:04.454 rmmod nvme_keyring 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4166933 ']' 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4166933 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4166933 ']' 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4166933 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.454 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166933 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166933' 00:09:04.713 killing process with pid 4166933 00:09:04.713 05:34:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4166933 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4166933 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.713 05:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.250 00:09:07.250 real 0m22.846s 00:09:07.250 user 1m3.506s 00:09:07.250 sys 0m8.239s 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:07.250 ************************************ 00:09:07.250 END TEST 
nvmf_lvol 00:09:07.250 ************************************ 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.250 ************************************ 00:09:07.250 START TEST nvmf_lvs_grow 00:09:07.250 ************************************ 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:07.250 * Looking for test storage... 00:09:07.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.250 05:34:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.250 --rc genhtml_branch_coverage=1 00:09:07.250 --rc genhtml_function_coverage=1 00:09:07.250 --rc genhtml_legend=1 00:09:07.250 --rc geninfo_all_blocks=1 00:09:07.250 --rc geninfo_unexecuted_blocks=1 00:09:07.250 00:09:07.250 ' 
00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.250 --rc genhtml_branch_coverage=1 00:09:07.250 --rc genhtml_function_coverage=1 00:09:07.250 --rc genhtml_legend=1 00:09:07.250 --rc geninfo_all_blocks=1 00:09:07.250 --rc geninfo_unexecuted_blocks=1 00:09:07.250 00:09:07.250 ' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.250 --rc genhtml_branch_coverage=1 00:09:07.250 --rc genhtml_function_coverage=1 00:09:07.250 --rc genhtml_legend=1 00:09:07.250 --rc geninfo_all_blocks=1 00:09:07.250 --rc geninfo_unexecuted_blocks=1 00:09:07.250 00:09:07.250 ' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.250 --rc genhtml_branch_coverage=1 00:09:07.250 --rc genhtml_function_coverage=1 00:09:07.250 --rc genhtml_legend=1 00:09:07.250 --rc geninfo_all_blocks=1 00:09:07.250 --rc geninfo_unexecuted_blocks=1 00:09:07.250 00:09:07.250 ' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.250 05:34:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.250 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.250 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.250 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.250 
05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.250 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.250 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.251 05:34:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.251 
05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.251 05:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:13.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:13.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.822 
05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:13.822 Found net devices under 0000:af:00.0: cvl_0_0 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:13.822 Found net devices under 0000:af:00.1: cvl_0_1 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.822 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.823 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.823 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.082 05:34:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:09:14.082 00:09:14.082 --- 10.0.0.2 ping statistics --- 00:09:14.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.082 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:09:14.082 00:09:14.082 --- 10.0.0.1 ping statistics --- 00:09:14.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.082 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4173036 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4173036 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4173036 ']' 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.082 05:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.082 [2024-12-10 05:34:31.902706] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:09:14.082 [2024-12-10 05:34:31.902754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.082 [2024-12-10 05:34:31.987484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.082 [2024-12-10 05:34:32.028333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.082 [2024-12-10 05:34:32.028363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.082 [2024-12-10 05:34:32.028370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.082 [2024-12-10 05:34:32.028376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.082 [2024-12-10 05:34:32.028381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
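The `waitforlisten 4173036` step above blocks until the freshly launched `nvmf_tgt` process creates its RPC socket at `/var/tmp/spdk.sock` (the same pattern is used later for bdevperf's `/var/tmp/bdevperf.sock`). A minimal sketch of that wait loop, assuming a simple poll with a retry budget; the function and parameter names here are illustrative, not SPDK's actual `autotest_common.sh` helpers:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the "waitforlisten" pattern: poll until the
# app's UNIX-domain RPC socket appears, bailing out early if the
# process dies or the retry budget is exhausted.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Give up immediately if the target process is already gone.
        kill -0 "$pid" 2>/dev/null || return 1
        # Success once the UNIX-domain socket exists.
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

Checking process liveness on every iteration matters here: without it, a crashed `nvmf_tgt` would stall the pipeline for the full retry budget instead of failing fast.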
00:09:14.082 [2024-12-10 05:34:32.028897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.341 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.600 [2024-12-10 05:34:32.332978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.600 ************************************ 00:09:14.600 START TEST lvs_grow_clean 00:09:14.600 ************************************ 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.600 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.859 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.859 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.859 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:14.859 05:34:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:14.859 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:15.118 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:15.118 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:15.118 05:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa lvol 150 00:09:15.377 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=37e5b1f7-dfd1-4462-9c32-668a613d2d09 00:09:15.377 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.377 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.636 [2024-12-10 05:34:33.343100] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:15.636 [2024-12-10 05:34:33.343145] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.636 true 00:09:15.636 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:15.636 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.636 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.636 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.894 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37e5b1f7-dfd1-4462-9c32-668a613d2d09 00:09:16.152 05:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.152 [2024-12-10 05:34:34.081307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.152 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4173525 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4173525 /var/tmp/bdevperf.sock 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4173525 ']' 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.411 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.411 [2024-12-10 05:34:34.324997] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:09:16.411 [2024-12-10 05:34:34.325044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173525 ] 00:09:16.670 [2024-12-10 05:34:34.406133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.670 [2024-12-10 05:34:34.445225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.670 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.670 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:16.670 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:17.243 Nvme0n1 00:09:17.243 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:17.243 [ 00:09:17.243 { 00:09:17.243 "name": "Nvme0n1", 00:09:17.243 "aliases": [ 00:09:17.243 "37e5b1f7-dfd1-4462-9c32-668a613d2d09" 00:09:17.243 ], 00:09:17.243 "product_name": "NVMe disk", 00:09:17.243 "block_size": 4096, 00:09:17.243 "num_blocks": 38912, 00:09:17.243 "uuid": "37e5b1f7-dfd1-4462-9c32-668a613d2d09", 00:09:17.243 "numa_id": 1, 00:09:17.243 "assigned_rate_limits": { 00:09:17.243 "rw_ios_per_sec": 0, 00:09:17.243 "rw_mbytes_per_sec": 0, 00:09:17.243 "r_mbytes_per_sec": 0, 00:09:17.243 "w_mbytes_per_sec": 0 00:09:17.243 }, 00:09:17.243 "claimed": false, 00:09:17.243 "zoned": false, 00:09:17.243 "supported_io_types": { 00:09:17.243 "read": true, 
00:09:17.243 "write": true, 00:09:17.243 "unmap": true, 00:09:17.243 "flush": true, 00:09:17.243 "reset": true, 00:09:17.243 "nvme_admin": true, 00:09:17.243 "nvme_io": true, 00:09:17.243 "nvme_io_md": false, 00:09:17.243 "write_zeroes": true, 00:09:17.243 "zcopy": false, 00:09:17.243 "get_zone_info": false, 00:09:17.243 "zone_management": false, 00:09:17.243 "zone_append": false, 00:09:17.243 "compare": true, 00:09:17.243 "compare_and_write": true, 00:09:17.243 "abort": true, 00:09:17.243 "seek_hole": false, 00:09:17.243 "seek_data": false, 00:09:17.243 "copy": true, 00:09:17.243 "nvme_iov_md": false 00:09:17.243 }, 00:09:17.243 "memory_domains": [ 00:09:17.243 { 00:09:17.243 "dma_device_id": "system", 00:09:17.243 "dma_device_type": 1 00:09:17.243 } 00:09:17.243 ], 00:09:17.243 "driver_specific": { 00:09:17.243 "nvme": [ 00:09:17.243 { 00:09:17.243 "trid": { 00:09:17.243 "trtype": "TCP", 00:09:17.243 "adrfam": "IPv4", 00:09:17.243 "traddr": "10.0.0.2", 00:09:17.243 "trsvcid": "4420", 00:09:17.243 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:17.243 }, 00:09:17.243 "ctrlr_data": { 00:09:17.243 "cntlid": 1, 00:09:17.243 "vendor_id": "0x8086", 00:09:17.243 "model_number": "SPDK bdev Controller", 00:09:17.243 "serial_number": "SPDK0", 00:09:17.243 "firmware_revision": "25.01", 00:09:17.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.243 "oacs": { 00:09:17.243 "security": 0, 00:09:17.243 "format": 0, 00:09:17.243 "firmware": 0, 00:09:17.243 "ns_manage": 0 00:09:17.243 }, 00:09:17.243 "multi_ctrlr": true, 00:09:17.243 "ana_reporting": false 00:09:17.243 }, 00:09:17.243 "vs": { 00:09:17.243 "nvme_version": "1.3" 00:09:17.243 }, 00:09:17.243 "ns_data": { 00:09:17.243 "id": 1, 00:09:17.243 "can_share": true 00:09:17.243 } 00:09:17.243 } 00:09:17.243 ], 00:09:17.243 "mp_policy": "active_passive" 00:09:17.243 } 00:09:17.243 } 00:09:17.243 ] 00:09:17.243 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=4173754 00:09:17.243 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:17.243 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.501 Running I/O for 10 seconds... 00:09:18.435 Latency(us) 00:09:18.435 [2024-12-10T04:34:36.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.435 Nvme0n1 : 1.00 23406.00 91.43 0.00 0.00 0.00 0.00 0.00 00:09:18.435 [2024-12-10T04:34:36.394Z] =================================================================================================================== 00:09:18.435 [2024-12-10T04:34:36.394Z] Total : 23406.00 91.43 0.00 0.00 0.00 0.00 0.00 00:09:18.435 00:09:19.369 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:19.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.370 Nvme0n1 : 2.00 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:09:19.370 [2024-12-10T04:34:37.329Z] =================================================================================================================== 00:09:19.370 [2024-12-10T04:34:37.329Z] Total : 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:09:19.370 00:09:19.627 true 00:09:19.627 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:19.627 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:19.627 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:19.627 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:19.627 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4173754 00:09:20.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.562 Nvme0n1 : 3.00 23620.67 92.27 0.00 0.00 0.00 0.00 0.00 00:09:20.562 [2024-12-10T04:34:38.521Z] =================================================================================================================== 00:09:20.562 [2024-12-10T04:34:38.521Z] Total : 23620.67 92.27 0.00 0.00 0.00 0.00 0.00 00:09:20.562 00:09:21.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.496 Nvme0n1 : 4.00 23669.75 92.46 0.00 0.00 0.00 0.00 0.00 00:09:21.496 [2024-12-10T04:34:39.455Z] =================================================================================================================== 00:09:21.496 [2024-12-10T04:34:39.455Z] Total : 23669.75 92.46 0.00 0.00 0.00 0.00 0.00 00:09:21.496 00:09:22.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.430 Nvme0n1 : 5.00 23690.60 92.54 0.00 0.00 0.00 0.00 0.00 00:09:22.430 [2024-12-10T04:34:40.389Z] =================================================================================================================== 00:09:22.430 [2024-12-10T04:34:40.389Z] Total : 23690.60 92.54 0.00 0.00 0.00 0.00 0.00 00:09:22.430 00:09:23.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.365 Nvme0n1 : 6.00 23725.33 92.68 0.00 0.00 0.00 0.00 0.00 00:09:23.365 [2024-12-10T04:34:41.324Z] =================================================================================================================== 00:09:23.365 
[2024-12-10T04:34:41.324Z] Total : 23725.33 92.68 0.00 0.00 0.00 0.00 0.00 00:09:23.365 00:09:24.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.741 Nvme0n1 : 7.00 23754.86 92.79 0.00 0.00 0.00 0.00 0.00 00:09:24.741 [2024-12-10T04:34:42.700Z] =================================================================================================================== 00:09:24.741 [2024-12-10T04:34:42.700Z] Total : 23754.86 92.79 0.00 0.00 0.00 0.00 0.00 00:09:24.741 00:09:25.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.677 Nvme0n1 : 8.00 23778.62 92.89 0.00 0.00 0.00 0.00 0.00 00:09:25.677 [2024-12-10T04:34:43.636Z] =================================================================================================================== 00:09:25.677 [2024-12-10T04:34:43.636Z] Total : 23778.62 92.89 0.00 0.00 0.00 0.00 0.00 00:09:25.677 00:09:26.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.612 Nvme0n1 : 9.00 23803.89 92.98 0.00 0.00 0.00 0.00 0.00 00:09:26.612 [2024-12-10T04:34:44.571Z] =================================================================================================================== 00:09:26.612 [2024-12-10T04:34:44.571Z] Total : 23803.89 92.98 0.00 0.00 0.00 0.00 0.00 00:09:26.612 00:09:27.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.548 Nvme0n1 : 10.00 23818.50 93.04 0.00 0.00 0.00 0.00 0.00 00:09:27.548 [2024-12-10T04:34:45.507Z] =================================================================================================================== 00:09:27.548 [2024-12-10T04:34:45.507Z] Total : 23818.50 93.04 0.00 0.00 0.00 0.00 0.00 00:09:27.548 00:09:27.548 00:09:27.548 Latency(us) 00:09:27.548 [2024-12-10T04:34:45.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:27.548 Nvme0n1 : 10.00 23823.26 93.06 0.00 0.00 5369.84 2902.31 11671.65 00:09:27.548 [2024-12-10T04:34:45.507Z] =================================================================================================================== 00:09:27.548 [2024-12-10T04:34:45.507Z] Total : 23823.26 93.06 0.00 0.00 5369.84 2902.31 11671.65 00:09:27.548 { 00:09:27.548 "results": [ 00:09:27.548 { 00:09:27.548 "job": "Nvme0n1", 00:09:27.548 "core_mask": "0x2", 00:09:27.548 "workload": "randwrite", 00:09:27.548 "status": "finished", 00:09:27.548 "queue_depth": 128, 00:09:27.548 "io_size": 4096, 00:09:27.548 "runtime": 10.003374, 00:09:27.548 "iops": 23823.262031390608, 00:09:27.548 "mibps": 93.05961731011956, 00:09:27.548 "io_failed": 0, 00:09:27.548 "io_timeout": 0, 00:09:27.548 "avg_latency_us": 5369.83609280552, 00:09:27.548 "min_latency_us": 2902.308571428571, 00:09:27.548 "max_latency_us": 11671.649523809523 00:09:27.548 } 00:09:27.548 ], 00:09:27.548 "core_count": 1 00:09:27.548 } 00:09:27.548 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4173525 00:09:27.548 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4173525 ']' 00:09:27.548 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4173525 00:09:27.548 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:27.548 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.549 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4173525 00:09:27.549 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:27.549 05:34:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:27.549 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4173525' 00:09:27.549 killing process with pid 4173525 00:09:27.549 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4173525 00:09:27.549 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.549 00:09:27.549 Latency(us) 00:09:27.549 [2024-12-10T04:34:45.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.549 [2024-12-10T04:34:45.508Z] =================================================================================================================== 00:09:27.549 [2024-12-10T04:34:45.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.549 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4173525 00:09:27.808 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.808 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.067 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:28.067 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:28.325 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:28.325 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:28.325 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.584 [2024-12-10 05:34:46.299942] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.584 
05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:28.584 request: 00:09:28.584 { 00:09:28.584 "uuid": "fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa", 00:09:28.584 "method": "bdev_lvol_get_lvstores", 00:09:28.584 "req_id": 1 00:09:28.584 } 00:09:28.584 Got JSON-RPC error response 00:09:28.584 response: 00:09:28.584 { 00:09:28.584 "code": -19, 00:09:28.584 "message": "No such device" 00:09:28.584 } 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.584 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.585 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.585 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.844 aio_bdev 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 37e5b1f7-dfd1-4462-9c32-668a613d2d09 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=37e5b1f7-dfd1-4462-9c32-668a613d2d09 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.844 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.103 05:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 37e5b1f7-dfd1-4462-9c32-668a613d2d09 -t 2000 00:09:29.362 [ 00:09:29.362 { 00:09:29.362 "name": "37e5b1f7-dfd1-4462-9c32-668a613d2d09", 00:09:29.362 "aliases": [ 00:09:29.362 "lvs/lvol" 00:09:29.362 ], 00:09:29.362 "product_name": "Logical Volume", 00:09:29.362 "block_size": 4096, 00:09:29.362 "num_blocks": 38912, 00:09:29.362 "uuid": "37e5b1f7-dfd1-4462-9c32-668a613d2d09", 00:09:29.362 "assigned_rate_limits": { 00:09:29.362 "rw_ios_per_sec": 0, 00:09:29.362 "rw_mbytes_per_sec": 0, 00:09:29.362 "r_mbytes_per_sec": 0, 00:09:29.362 "w_mbytes_per_sec": 0 00:09:29.362 }, 00:09:29.362 "claimed": false, 00:09:29.362 "zoned": false, 00:09:29.362 "supported_io_types": { 00:09:29.362 "read": true, 00:09:29.362 "write": true, 00:09:29.362 "unmap": true, 00:09:29.362 "flush": false, 00:09:29.362 "reset": true, 00:09:29.362 
"nvme_admin": false, 00:09:29.362 "nvme_io": false, 00:09:29.362 "nvme_io_md": false, 00:09:29.362 "write_zeroes": true, 00:09:29.362 "zcopy": false, 00:09:29.362 "get_zone_info": false, 00:09:29.362 "zone_management": false, 00:09:29.362 "zone_append": false, 00:09:29.362 "compare": false, 00:09:29.362 "compare_and_write": false, 00:09:29.362 "abort": false, 00:09:29.362 "seek_hole": true, 00:09:29.362 "seek_data": true, 00:09:29.362 "copy": false, 00:09:29.362 "nvme_iov_md": false 00:09:29.362 }, 00:09:29.362 "driver_specific": { 00:09:29.362 "lvol": { 00:09:29.362 "lvol_store_uuid": "fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa", 00:09:29.362 "base_bdev": "aio_bdev", 00:09:29.362 "thin_provision": false, 00:09:29.362 "num_allocated_clusters": 38, 00:09:29.362 "snapshot": false, 00:09:29.362 "clone": false, 00:09:29.362 "esnap_clone": false 00:09:29.362 } 00:09:29.362 } 00:09:29.362 } 00:09:29.362 ] 00:09:29.362 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:29.362 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:29.362 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:29.362 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:29.362 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:29.362 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:29.621 05:34:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:29.621 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37e5b1f7-dfd1-4462-9c32-668a613d2d09 00:09:29.880 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd6aea6b-21f5-428f-bd9f-f6814ca1ddaa 00:09:30.140 05:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.140 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.140 00:09:30.140 real 0m15.696s 00:09:30.140 user 0m15.196s 00:09:30.140 sys 0m1.550s 00:09:30.140 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.140 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.140 ************************************ 00:09:30.140 END TEST lvs_grow_clean 00:09:30.140 ************************************ 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.398 ************************************ 
00:09:30.398 START TEST lvs_grow_dirty 00:09:30.398 ************************************ 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.398 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.657 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:30.657 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:30.657 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:30.657 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:30.657 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:30.916 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:30.916 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:30.916 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 lvol 150 00:09:31.209 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa0c0fbc-b51d-4192-bd54-53e8700f1331 00:09:31.209 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.209 05:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:31.209 [2024-12-10 05:34:49.094113] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:31.209 [2024-12-10 05:34:49.094161] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:31.209 true 00:09:31.209 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:31.209 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:31.561 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:31.561 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:31.561 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa0c0fbc-b51d-4192-bd54-53e8700f1331 00:09:31.820 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:32.078 [2024-12-10 05:34:49.844336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.078 05:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4176197 00:09:32.336 05:34:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4176197 /var/tmp/bdevperf.sock 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4176197 ']' 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:32.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.336 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 [2024-12-10 05:34:50.103437] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:09:32.336 [2024-12-10 05:34:50.103487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176197 ] 00:09:32.336 [2024-12-10 05:34:50.182232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.336 [2024-12-10 05:34:50.223256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.595 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.595 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:32.595 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:32.853 Nvme0n1 00:09:32.853 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:33.112 [ 00:09:33.112 { 00:09:33.112 "name": "Nvme0n1", 00:09:33.112 "aliases": [ 00:09:33.112 "aa0c0fbc-b51d-4192-bd54-53e8700f1331" 00:09:33.112 ], 00:09:33.112 "product_name": "NVMe disk", 00:09:33.112 "block_size": 4096, 00:09:33.112 "num_blocks": 38912, 00:09:33.112 "uuid": "aa0c0fbc-b51d-4192-bd54-53e8700f1331", 00:09:33.112 "numa_id": 1, 00:09:33.112 "assigned_rate_limits": { 00:09:33.112 "rw_ios_per_sec": 0, 00:09:33.112 "rw_mbytes_per_sec": 0, 00:09:33.112 "r_mbytes_per_sec": 0, 00:09:33.112 "w_mbytes_per_sec": 0 00:09:33.112 }, 00:09:33.112 "claimed": false, 00:09:33.112 "zoned": false, 00:09:33.112 "supported_io_types": { 00:09:33.112 "read": true, 
00:09:33.112 "write": true, 00:09:33.112 "unmap": true, 00:09:33.112 "flush": true, 00:09:33.112 "reset": true, 00:09:33.112 "nvme_admin": true, 00:09:33.112 "nvme_io": true, 00:09:33.112 "nvme_io_md": false, 00:09:33.112 "write_zeroes": true, 00:09:33.112 "zcopy": false, 00:09:33.112 "get_zone_info": false, 00:09:33.112 "zone_management": false, 00:09:33.112 "zone_append": false, 00:09:33.112 "compare": true, 00:09:33.112 "compare_and_write": true, 00:09:33.112 "abort": true, 00:09:33.112 "seek_hole": false, 00:09:33.112 "seek_data": false, 00:09:33.112 "copy": true, 00:09:33.112 "nvme_iov_md": false 00:09:33.112 }, 00:09:33.112 "memory_domains": [ 00:09:33.112 { 00:09:33.112 "dma_device_id": "system", 00:09:33.112 "dma_device_type": 1 00:09:33.112 } 00:09:33.112 ], 00:09:33.112 "driver_specific": { 00:09:33.112 "nvme": [ 00:09:33.112 { 00:09:33.112 "trid": { 00:09:33.112 "trtype": "TCP", 00:09:33.112 "adrfam": "IPv4", 00:09:33.112 "traddr": "10.0.0.2", 00:09:33.112 "trsvcid": "4420", 00:09:33.112 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:33.112 }, 00:09:33.112 "ctrlr_data": { 00:09:33.112 "cntlid": 1, 00:09:33.112 "vendor_id": "0x8086", 00:09:33.112 "model_number": "SPDK bdev Controller", 00:09:33.112 "serial_number": "SPDK0", 00:09:33.112 "firmware_revision": "25.01", 00:09:33.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:33.112 "oacs": { 00:09:33.112 "security": 0, 00:09:33.112 "format": 0, 00:09:33.112 "firmware": 0, 00:09:33.112 "ns_manage": 0 00:09:33.112 }, 00:09:33.112 "multi_ctrlr": true, 00:09:33.112 "ana_reporting": false 00:09:33.112 }, 00:09:33.112 "vs": { 00:09:33.112 "nvme_version": "1.3" 00:09:33.112 }, 00:09:33.112 "ns_data": { 00:09:33.112 "id": 1, 00:09:33.112 "can_share": true 00:09:33.112 } 00:09:33.112 } 00:09:33.112 ], 00:09:33.112 "mp_policy": "active_passive" 00:09:33.112 } 00:09:33.112 } 00:09:33.112 ] 00:09:33.112 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=4176337 00:09:33.112 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:33.112 05:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:33.112 Running I/O for 10 seconds... 00:09:34.488 Latency(us) 00:09:34.488 [2024-12-10T04:34:52.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.489 Nvme0n1 : 1.00 23378.00 91.32 0.00 0.00 0.00 0.00 0.00 00:09:34.489 [2024-12-10T04:34:52.448Z] =================================================================================================================== 00:09:34.489 [2024-12-10T04:34:52.448Z] Total : 23378.00 91.32 0.00 0.00 0.00 0.00 0.00 00:09:34.489 00:09:35.057 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:35.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.316 Nvme0n1 : 2.00 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:09:35.316 [2024-12-10T04:34:53.275Z] =================================================================================================================== 00:09:35.316 [2024-12-10T04:34:53.275Z] Total : 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:09:35.316 00:09:35.316 true 00:09:35.316 05:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:35.316 05:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:35.574 05:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:35.574 05:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:35.574 05:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4176337 00:09:36.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.142 Nvme0n1 : 3.00 23655.67 92.40 0.00 0.00 0.00 0.00 0.00 00:09:36.142 [2024-12-10T04:34:54.101Z] =================================================================================================================== 00:09:36.142 [2024-12-10T04:34:54.101Z] Total : 23655.67 92.40 0.00 0.00 0.00 0.00 0.00 00:09:36.142 00:09:37.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.517 Nvme0n1 : 4.00 23734.00 92.71 0.00 0.00 0.00 0.00 0.00 00:09:37.517 [2024-12-10T04:34:55.476Z] =================================================================================================================== 00:09:37.517 [2024-12-10T04:34:55.476Z] Total : 23734.00 92.71 0.00 0.00 0.00 0.00 0.00 00:09:37.517 00:09:38.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.453 Nvme0n1 : 5.00 23789.00 92.93 0.00 0.00 0.00 0.00 0.00 00:09:38.453 [2024-12-10T04:34:56.412Z] =================================================================================================================== 00:09:38.453 [2024-12-10T04:34:56.412Z] Total : 23789.00 92.93 0.00 0.00 0.00 0.00 0.00 00:09:38.453 00:09:39.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.388 Nvme0n1 : 6.00 23827.33 93.08 0.00 0.00 0.00 0.00 0.00 00:09:39.388 [2024-12-10T04:34:57.347Z] =================================================================================================================== 00:09:39.388 
[2024-12-10T04:34:57.347Z] Total : 23827.33 93.08 0.00 0.00 0.00 0.00 0.00 00:09:39.388 00:09:40.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.323 Nvme0n1 : 7.00 23865.14 93.22 0.00 0.00 0.00 0.00 0.00 00:09:40.323 [2024-12-10T04:34:58.282Z] =================================================================================================================== 00:09:40.324 [2024-12-10T04:34:58.283Z] Total : 23865.14 93.22 0.00 0.00 0.00 0.00 0.00 00:09:40.324 00:09:41.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.259 Nvme0n1 : 8.00 23884.50 93.30 0.00 0.00 0.00 0.00 0.00 00:09:41.259 [2024-12-10T04:34:59.218Z] =================================================================================================================== 00:09:41.259 [2024-12-10T04:34:59.218Z] Total : 23884.50 93.30 0.00 0.00 0.00 0.00 0.00 00:09:41.259 00:09:42.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.194 Nvme0n1 : 9.00 23885.67 93.30 0.00 0.00 0.00 0.00 0.00 00:09:42.194 [2024-12-10T04:35:00.153Z] =================================================================================================================== 00:09:42.194 [2024-12-10T04:35:00.153Z] Total : 23885.67 93.30 0.00 0.00 0.00 0.00 0.00 00:09:42.194 00:09:43.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.130 Nvme0n1 : 10.00 23869.10 93.24 0.00 0.00 0.00 0.00 0.00 00:09:43.130 [2024-12-10T04:35:01.089Z] =================================================================================================================== 00:09:43.130 [2024-12-10T04:35:01.089Z] Total : 23869.10 93.24 0.00 0.00 0.00 0.00 0.00 00:09:43.130 00:09:43.130 00:09:43.130 Latency(us) 00:09:43.130 [2024-12-10T04:35:01.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:43.130 Nvme0n1 : 10.00 23875.49 93.26 0.00 0.00 5358.13 3214.38 12607.88 00:09:43.130 [2024-12-10T04:35:01.089Z] =================================================================================================================== 00:09:43.130 [2024-12-10T04:35:01.089Z] Total : 23875.49 93.26 0.00 0.00 5358.13 3214.38 12607.88 00:09:43.130 { 00:09:43.130 "results": [ 00:09:43.130 { 00:09:43.130 "job": "Nvme0n1", 00:09:43.130 "core_mask": "0x2", 00:09:43.130 "workload": "randwrite", 00:09:43.130 "status": "finished", 00:09:43.130 "queue_depth": 128, 00:09:43.130 "io_size": 4096, 00:09:43.130 "runtime": 10.002684, 00:09:43.130 "iops": 23875.49181799605, 00:09:43.130 "mibps": 93.26363991404708, 00:09:43.130 "io_failed": 0, 00:09:43.130 "io_timeout": 0, 00:09:43.130 "avg_latency_us": 5358.134007986522, 00:09:43.130 "min_latency_us": 3214.384761904762, 00:09:43.130 "max_latency_us": 12607.878095238095 00:09:43.130 } 00:09:43.130 ], 00:09:43.130 "core_count": 1 00:09:43.130 } 00:09:43.130 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4176197 00:09:43.130 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4176197 ']' 00:09:43.130 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4176197 00:09:43.390 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4176197 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:43.391 05:35:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4176197' 00:09:43.391 killing process with pid 4176197 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4176197 00:09:43.391 Received shutdown signal, test time was about 10.000000 seconds 00:09:43.391 00:09:43.391 Latency(us) 00:09:43.391 [2024-12-10T04:35:01.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.391 [2024-12-10T04:35:01.350Z] =================================================================================================================== 00:09:43.391 [2024-12-10T04:35:01.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4176197 00:09:43.391 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.648 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:43.907 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.907 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4173036 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4173036 00:09:44.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4173036 Killed "${NVMF_APP[@]}" "$@" 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4178160 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4178160 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4178160 ']' 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.166 05:35:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.166 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 [2024-12-10 05:35:01.994784] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:09:44.166 [2024-12-10 05:35:01.994828] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.166 [2024-12-10 05:35:02.078309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.166 [2024-12-10 05:35:02.117119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.166 [2024-12-10 05:35:02.117152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.166 [2024-12-10 05:35:02.117159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.166 [2024-12-10 05:35:02.117165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.166 [2024-12-10 05:35:02.117170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:44.166 [2024-12-10 05:35:02.117719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.425 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.684 [2024-12-10 05:35:02.419747] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:44.684 [2024-12-10 05:35:02.419838] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:44.684 [2024-12-10 05:35:02.419863] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aa0c0fbc-b51d-4192-bd54-53e8700f1331 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=aa0c0fbc-b51d-4192-bd54-53e8700f1331 
00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.684 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.944 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa0c0fbc-b51d-4192-bd54-53e8700f1331 -t 2000 00:09:44.944 [ 00:09:44.944 { 00:09:44.944 "name": "aa0c0fbc-b51d-4192-bd54-53e8700f1331", 00:09:44.944 "aliases": [ 00:09:44.944 "lvs/lvol" 00:09:44.944 ], 00:09:44.944 "product_name": "Logical Volume", 00:09:44.944 "block_size": 4096, 00:09:44.944 "num_blocks": 38912, 00:09:44.944 "uuid": "aa0c0fbc-b51d-4192-bd54-53e8700f1331", 00:09:44.944 "assigned_rate_limits": { 00:09:44.944 "rw_ios_per_sec": 0, 00:09:44.944 "rw_mbytes_per_sec": 0, 00:09:44.944 "r_mbytes_per_sec": 0, 00:09:44.944 "w_mbytes_per_sec": 0 00:09:44.944 }, 00:09:44.944 "claimed": false, 00:09:44.944 "zoned": false, 00:09:44.944 "supported_io_types": { 00:09:44.944 "read": true, 00:09:44.944 "write": true, 00:09:44.944 "unmap": true, 00:09:44.944 "flush": false, 00:09:44.944 "reset": true, 00:09:44.944 "nvme_admin": false, 00:09:44.944 "nvme_io": false, 00:09:44.944 "nvme_io_md": false, 00:09:44.944 "write_zeroes": true, 00:09:44.944 "zcopy": false, 00:09:44.944 "get_zone_info": false, 00:09:44.944 "zone_management": false, 00:09:44.944 "zone_append": 
false, 00:09:44.944 "compare": false, 00:09:44.944 "compare_and_write": false, 00:09:44.944 "abort": false, 00:09:44.944 "seek_hole": true, 00:09:44.944 "seek_data": true, 00:09:44.944 "copy": false, 00:09:44.944 "nvme_iov_md": false 00:09:44.944 }, 00:09:44.944 "driver_specific": { 00:09:44.944 "lvol": { 00:09:44.944 "lvol_store_uuid": "ddbacfd5-5040-48ba-8063-b534f4a6e2c3", 00:09:44.944 "base_bdev": "aio_bdev", 00:09:44.944 "thin_provision": false, 00:09:44.944 "num_allocated_clusters": 38, 00:09:44.944 "snapshot": false, 00:09:44.944 "clone": false, 00:09:44.944 "esnap_clone": false 00:09:44.944 } 00:09:44.944 } 00:09:44.944 } 00:09:44.944 ] 00:09:44.944 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:44.944 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:44.944 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:45.202 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:45.202 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:45.202 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:45.462 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:45.462 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:45.462 [2024-12-10 05:35:03.388653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.721 05:35:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:45.721 request: 00:09:45.721 { 00:09:45.721 "uuid": "ddbacfd5-5040-48ba-8063-b534f4a6e2c3", 00:09:45.721 "method": "bdev_lvol_get_lvstores", 00:09:45.721 "req_id": 1 00:09:45.721 } 00:09:45.721 Got JSON-RPC error response 00:09:45.721 response: 00:09:45.721 { 00:09:45.721 "code": -19, 00:09:45.721 "message": "No such device" 00:09:45.721 } 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.721 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.980 aio_bdev 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa0c0fbc-b51d-4192-bd54-53e8700f1331 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=aa0c0fbc-b51d-4192-bd54-53e8700f1331 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.980 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:46.239 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa0c0fbc-b51d-4192-bd54-53e8700f1331 -t 2000 00:09:46.239 [ 00:09:46.239 { 00:09:46.239 "name": "aa0c0fbc-b51d-4192-bd54-53e8700f1331", 00:09:46.239 "aliases": [ 00:09:46.239 "lvs/lvol" 00:09:46.239 ], 00:09:46.239 "product_name": "Logical Volume", 00:09:46.239 "block_size": 4096, 00:09:46.239 "num_blocks": 38912, 00:09:46.239 "uuid": "aa0c0fbc-b51d-4192-bd54-53e8700f1331", 00:09:46.239 "assigned_rate_limits": { 00:09:46.239 "rw_ios_per_sec": 0, 00:09:46.239 "rw_mbytes_per_sec": 0, 00:09:46.239 "r_mbytes_per_sec": 0, 00:09:46.239 "w_mbytes_per_sec": 0 00:09:46.239 }, 00:09:46.239 "claimed": false, 00:09:46.239 "zoned": false, 00:09:46.239 "supported_io_types": { 00:09:46.239 "read": true, 00:09:46.239 "write": true, 00:09:46.239 "unmap": true, 00:09:46.239 "flush": false, 00:09:46.239 "reset": true, 00:09:46.239 "nvme_admin": false, 00:09:46.239 "nvme_io": false, 00:09:46.239 "nvme_io_md": false, 00:09:46.239 "write_zeroes": true, 00:09:46.239 "zcopy": false, 00:09:46.239 "get_zone_info": false, 00:09:46.239 "zone_management": false, 00:09:46.239 "zone_append": false, 00:09:46.239 "compare": false, 00:09:46.239 "compare_and_write": false, 
00:09:46.239 "abort": false, 00:09:46.239 "seek_hole": true, 00:09:46.239 "seek_data": true, 00:09:46.239 "copy": false, 00:09:46.239 "nvme_iov_md": false 00:09:46.239 }, 00:09:46.239 "driver_specific": { 00:09:46.239 "lvol": { 00:09:46.239 "lvol_store_uuid": "ddbacfd5-5040-48ba-8063-b534f4a6e2c3", 00:09:46.239 "base_bdev": "aio_bdev", 00:09:46.239 "thin_provision": false, 00:09:46.239 "num_allocated_clusters": 38, 00:09:46.239 "snapshot": false, 00:09:46.239 "clone": false, 00:09:46.239 "esnap_clone": false 00:09:46.239 } 00:09:46.239 } 00:09:46.239 } 00:09:46.239 ] 00:09:46.239 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:46.239 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:46.239 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:46.498 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:46.498 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:46.498 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:46.757 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:46.757 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa0c0fbc-b51d-4192-bd54-53e8700f1331 00:09:46.757 05:35:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ddbacfd5-5040-48ba-8063-b534f4a6e2c3 00:09:47.016 05:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:47.274 00:09:47.274 real 0m16.996s 00:09:47.274 user 0m43.946s 00:09:47.274 sys 0m3.852s 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.274 ************************************ 00:09:47.274 END TEST lvs_grow_dirty 00:09:47.274 ************************************ 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:47.274 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:47.275 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:47.275 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:47.275 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:47.275 nvmf_trace.0 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.533 rmmod nvme_tcp 00:09:47.533 rmmod nvme_fabrics 00:09:47.533 rmmod nvme_keyring 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4178160 ']' 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4178160 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4178160 ']' 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4178160 
00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4178160 00:09:47.533 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.534 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.534 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4178160' 00:09:47.534 killing process with pid 4178160 00:09:47.534 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4178160 00:09:47.534 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4178160 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.792 05:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.697 00:09:49.697 real 0m42.787s 00:09:49.697 user 1m4.983s 00:09:49.697 sys 0m10.975s 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:49.697 ************************************ 00:09:49.697 END TEST nvmf_lvs_grow 00:09:49.697 ************************************ 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.697 05:35:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.957 ************************************ 00:09:49.957 START TEST nvmf_bdev_io_wait 00:09:49.957 ************************************ 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:49.957 * Looking for test storage... 
00:09:49.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.957 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.957 --rc genhtml_branch_coverage=1 00:09:49.957 --rc genhtml_function_coverage=1 00:09:49.957 --rc genhtml_legend=1 00:09:49.957 --rc geninfo_all_blocks=1 00:09:49.957 --rc geninfo_unexecuted_blocks=1 00:09:49.957 00:09:49.957 ' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.957 --rc genhtml_branch_coverage=1 00:09:49.957 --rc genhtml_function_coverage=1 00:09:49.957 --rc genhtml_legend=1 00:09:49.957 --rc geninfo_all_blocks=1 00:09:49.957 --rc geninfo_unexecuted_blocks=1 00:09:49.957 00:09:49.957 ' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.957 --rc genhtml_branch_coverage=1 00:09:49.957 --rc genhtml_function_coverage=1 00:09:49.957 --rc genhtml_legend=1 00:09:49.957 --rc geninfo_all_blocks=1 00:09:49.957 --rc geninfo_unexecuted_blocks=1 00:09:49.957 00:09:49.957 ' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.957 --rc genhtml_branch_coverage=1 00:09:49.957 --rc genhtml_function_coverage=1 00:09:49.957 --rc genhtml_legend=1 00:09:49.957 --rc geninfo_all_blocks=1 00:09:49.957 --rc geninfo_unexecuted_blocks=1 00:09:49.957 00:09:49.957 ' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.957 05:35:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.957 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.958 05:35:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.529 05:35:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.529 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.529 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.529 05:35:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.529 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.529 
05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.529 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.529 05:35:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.529 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.530 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.530 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.530 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.530 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.530 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:09:56.789 00:09:56.789 --- 10.0.0.2 ping statistics --- 00:09:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.789 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:56.789 00:09:56.789 --- 10.0.0.1 ping statistics --- 00:09:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.789 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4182688 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 4182688 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4182688 ']' 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.789 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.048 [2024-12-10 05:35:14.758635] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:09:57.048 [2024-12-10 05:35:14.758683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.048 [2024-12-10 05:35:14.843427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.048 [2024-12-10 05:35:14.885561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.048 [2024-12-10 05:35:14.885597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:57.048 [2024-12-10 05:35:14.885604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.048 [2024-12-10 05:35:14.885610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.048 [2024-12-10 05:35:14.885615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.048 [2024-12-10 05:35:14.887066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.048 [2024-12-10 05:35:14.887172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.048 [2024-12-10 05:35:14.887281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.048 [2024-12-10 05:35:14.887281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.048 05:35:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.048 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 [2024-12-10 05:35:15.023100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 Malloc0 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.315 
05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 [2024-12-10 05:35:15.074173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4182813 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4182816 
00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.315 { 00:09:57.315 "params": { 00:09:57.315 "name": "Nvme$subsystem", 00:09:57.315 "trtype": "$TEST_TRANSPORT", 00:09:57.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.315 "adrfam": "ipv4", 00:09:57.315 "trsvcid": "$NVMF_PORT", 00:09:57.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.315 "hdgst": ${hdgst:-false}, 00:09:57.315 "ddgst": ${ddgst:-false} 00:09:57.315 }, 00:09:57.315 "method": "bdev_nvme_attach_controller" 00:09:57.315 } 00:09:57.315 EOF 00:09:57.315 )") 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4182819 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.315 { 00:09:57.315 "params": { 00:09:57.315 "name": "Nvme$subsystem", 00:09:57.315 "trtype": "$TEST_TRANSPORT", 00:09:57.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.315 "adrfam": "ipv4", 00:09:57.315 "trsvcid": "$NVMF_PORT", 00:09:57.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.315 "hdgst": ${hdgst:-false}, 00:09:57.315 "ddgst": ${ddgst:-false} 00:09:57.315 }, 00:09:57.315 "method": "bdev_nvme_attach_controller" 00:09:57.315 } 00:09:57.315 EOF 00:09:57.315 )") 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4182823 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:57.315 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:57.315 05:35:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.315 { 00:09:57.315 "params": { 00:09:57.315 "name": "Nvme$subsystem", 00:09:57.315 "trtype": "$TEST_TRANSPORT", 00:09:57.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.315 "adrfam": "ipv4", 00:09:57.315 "trsvcid": "$NVMF_PORT", 00:09:57.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.315 "hdgst": ${hdgst:-false}, 00:09:57.315 "ddgst": ${ddgst:-false} 00:09:57.315 }, 00:09:57.315 "method": "bdev_nvme_attach_controller" 00:09:57.316 } 00:09:57.316 EOF 00:09:57.316 )") 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.316 { 00:09:57.316 "params": { 00:09:57.316 "name": "Nvme$subsystem", 00:09:57.316 "trtype": "$TEST_TRANSPORT", 00:09:57.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.316 "adrfam": "ipv4", 00:09:57.316 "trsvcid": "$NVMF_PORT", 00:09:57.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.316 "hdgst": ${hdgst:-false}, 00:09:57.316 "ddgst": ${ddgst:-false} 00:09:57.316 }, 00:09:57.316 "method": "bdev_nvme_attach_controller" 00:09:57.316 } 00:09:57.316 EOF 00:09:57.316 )") 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4182813 00:09:57.316 05:35:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.316 "params": { 00:09:57.316 "name": "Nvme1", 00:09:57.316 "trtype": "tcp", 00:09:57.316 "traddr": "10.0.0.2", 00:09:57.316 "adrfam": "ipv4", 00:09:57.316 "trsvcid": "4420", 00:09:57.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.316 "hdgst": false, 00:09:57.316 "ddgst": false 00:09:57.316 }, 00:09:57.316 "method": "bdev_nvme_attach_controller" 00:09:57.316 }' 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.316 "params": { 00:09:57.316 "name": "Nvme1", 00:09:57.316 "trtype": "tcp", 00:09:57.316 "traddr": "10.0.0.2", 00:09:57.316 "adrfam": "ipv4", 00:09:57.316 "trsvcid": "4420", 00:09:57.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.316 "hdgst": false, 00:09:57.316 "ddgst": false 00:09:57.316 }, 00:09:57.316 "method": "bdev_nvme_attach_controller" 00:09:57.316 }' 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.316 "params": { 00:09:57.316 "name": "Nvme1", 00:09:57.316 "trtype": "tcp", 00:09:57.316 "traddr": "10.0.0.2", 00:09:57.316 "adrfam": "ipv4", 00:09:57.316 "trsvcid": "4420", 00:09:57.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.316 "hdgst": false, 00:09:57.316 "ddgst": false 00:09:57.316 }, 00:09:57.316 "method": "bdev_nvme_attach_controller" 00:09:57.316 }' 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:57.316 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.316 "params": { 00:09:57.316 "name": "Nvme1", 00:09:57.316 "trtype": "tcp", 00:09:57.316 "traddr": "10.0.0.2", 00:09:57.316 "adrfam": "ipv4", 00:09:57.316 "trsvcid": "4420", 00:09:57.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.316 "hdgst": false, 00:09:57.316 "ddgst": false 00:09:57.316 }, 00:09:57.316 "method": "bdev_nvme_attach_controller" 00:09:57.316 }' 00:09:57.316 [2024-12-10 05:35:15.127914] Starting SPDK v25.01-pre git sha1 
4fb5f9881 / DPDK 24.03.0 initialization... 00:09:57.316 [2024-12-10 05:35:15.127958] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:57.316 [2024-12-10 05:35:15.131137] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:09:57.316 [2024-12-10 05:35:15.131138] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:09:57.316 [2024-12-10 05:35:15.131193] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:57.316 [2024-12-10 05:35:15.131194] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:57.316 [2024-12-10 05:35:15.132664] Starting SPDK v25.01-pre git sha1 
00:09:57.316 [2024-12-10 05:35:15.132707] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:57.578 [2024-12-10 05:35:15.294601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.578 [2024-12-10 05:35:15.331261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.578 [2024-12-10 05:35:15.387957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.578 [2024-12-10 05:35:15.432659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.578 [2024-12-10 05:35:15.488640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.837 [2024-12-10 05:35:15.533172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:57.837 [2024-12-10 05:35:15.592979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.837 [2024-12-10 05:35:15.647456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.837 Running I/O for 1 seconds... 00:09:58.095 Running I/O for 1 seconds... 00:09:58.095 Running I/O for 1 seconds... 00:09:58.095 Running I/O for 1 seconds... 
00:09:59.031 13738.00 IOPS, 53.66 MiB/s 00:09:59.031 Latency(us) 00:09:59.031 [2024-12-10T04:35:16.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.031 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:59.031 Nvme1n1 : 1.01 13796.90 53.89 0.00 0.00 9251.04 4369.07 14043.43 00:09:59.031 [2024-12-10T04:35:16.990Z] =================================================================================================================== 00:09:59.031 [2024-12-10T04:35:16.990Z] Total : 13796.90 53.89 0.00 0.00 9251.04 4369.07 14043.43 00:09:59.031 243696.00 IOPS, 951.94 MiB/s 00:09:59.031 Latency(us) 00:09:59.031 [2024-12-10T04:35:16.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.031 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:59.031 Nvme1n1 : 1.00 243330.97 950.51 0.00 0.00 523.55 220.40 1497.97 00:09:59.031 [2024-12-10T04:35:16.990Z] =================================================================================================================== 00:09:59.031 [2024-12-10T04:35:16.990Z] Total : 243330.97 950.51 0.00 0.00 523.55 220.40 1497.97 00:09:59.031 10155.00 IOPS, 39.67 MiB/s 00:09:59.031 Latency(us) 00:09:59.031 [2024-12-10T04:35:16.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.031 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:59.031 Nvme1n1 : 1.01 10223.25 39.93 0.00 0.00 12468.21 5398.92 22469.49 00:09:59.031 [2024-12-10T04:35:16.990Z] =================================================================================================================== 00:09:59.031 [2024-12-10T04:35:16.990Z] Total : 10223.25 39.93 0.00 0.00 12468.21 5398.92 22469.49 00:09:59.031 9414.00 IOPS, 36.77 MiB/s 00:09:59.031 Latency(us) 00:09:59.031 [2024-12-10T04:35:16.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.031 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:09:59.031 Nvme1n1 : 1.01 9475.87 37.02 0.00 0.00 13456.51 6085.49 25215.76 00:09:59.031 [2024-12-10T04:35:16.990Z] =================================================================================================================== 00:09:59.031 [2024-12-10T04:35:16.990Z] Total : 9475.87 37.02 0.00 0.00 13456.51 6085.49 25215.76 00:09:59.031 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4182816 00:09:59.290 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4182819 00:09:59.290 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4182823 00:09:59.290 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.290 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.290 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.290 rmmod nvme_tcp 00:09:59.290 rmmod nvme_fabrics 00:09:59.290 rmmod nvme_keyring 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4182688 ']' 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4182688 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4182688 ']' 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4182688 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4182688 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4182688' 00:09:59.290 killing process with pid 4182688 00:09:59.290 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4182688 00:09:59.290 05:35:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4182688 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.549 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.455 00:10:01.455 real 0m11.681s 00:10:01.455 user 0m16.684s 00:10:01.455 sys 0m6.937s 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.455 ************************************ 
00:10:01.455 END TEST nvmf_bdev_io_wait 00:10:01.455 ************************************ 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.455 05:35:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.715 ************************************ 00:10:01.715 START TEST nvmf_queue_depth 00:10:01.715 ************************************ 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:01.715 * Looking for test storage... 00:10:01.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.715 --rc genhtml_branch_coverage=1 00:10:01.715 --rc genhtml_function_coverage=1 00:10:01.715 --rc genhtml_legend=1 00:10:01.715 --rc geninfo_all_blocks=1 00:10:01.715 --rc 
geninfo_unexecuted_blocks=1 00:10:01.715 00:10:01.715 ' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.715 --rc genhtml_branch_coverage=1 00:10:01.715 --rc genhtml_function_coverage=1 00:10:01.715 --rc genhtml_legend=1 00:10:01.715 --rc geninfo_all_blocks=1 00:10:01.715 --rc geninfo_unexecuted_blocks=1 00:10:01.715 00:10:01.715 ' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.715 --rc genhtml_branch_coverage=1 00:10:01.715 --rc genhtml_function_coverage=1 00:10:01.715 --rc genhtml_legend=1 00:10:01.715 --rc geninfo_all_blocks=1 00:10:01.715 --rc geninfo_unexecuted_blocks=1 00:10:01.715 00:10:01.715 ' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.715 --rc genhtml_branch_coverage=1 00:10:01.715 --rc genhtml_function_coverage=1 00:10:01.715 --rc genhtml_legend=1 00:10:01.715 --rc geninfo_all_blocks=1 00:10:01.715 --rc geninfo_unexecuted_blocks=1 00:10:01.715 00:10:01.715 ' 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.715 05:35:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:01.715 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.716 05:35:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.716 05:35:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.716 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.286 05:35:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.286 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:08.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:08.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:08.287 Found net devices under 0000:af:00.0: cvl_0_0 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:08.287 Found net devices under 0000:af:00.1: cvl_0_1 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.287 
05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.287 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:10:08.546 00:10:08.546 --- 10.0.0.2 ping statistics --- 00:10:08.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.546 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:10:08.546 00:10:08.546 --- 10.0.0.1 ping statistics --- 00:10:08.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.546 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.546 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4187194 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4187194 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4187194 ']' 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.805 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.805 [2024-12-10 05:35:26.568871] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:10:08.805 [2024-12-10 05:35:26.568921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.805 [2024-12-10 05:35:26.653284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.805 [2024-12-10 05:35:26.690879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.805 [2024-12-10 05:35:26.690917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.805 [2024-12-10 05:35:26.690923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.805 [2024-12-10 05:35:26.690929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.805 [2024-12-10 05:35:26.690933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.805 [2024-12-10 05:35:26.691463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 [2024-12-10 05:35:27.440700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 Malloc0 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 [2024-12-10 05:35:27.487041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.740 05:35:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4187303 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4187303 /var/tmp/bdevperf.sock 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4187303 ']' 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.740 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.740 [2024-12-10 05:35:27.536486] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:10:09.740 [2024-12-10 05:35:27.536532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4187303 ] 00:10:09.740 [2024-12-10 05:35:27.617326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.741 [2024-12-10 05:35:27.658615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.999 NVMe0n1 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.999 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:10.258 Running I/O for 10 seconds... 
00:10:12.131 12279.00 IOPS, 47.96 MiB/s
[2024-12-10T04:35:31.466Z] 12293.00 IOPS, 48.02 MiB/s
[2024-12-10T04:35:32.403Z] 12462.00 IOPS, 48.68 MiB/s
[2024-12-10T04:35:33.340Z] 12537.25 IOPS, 48.97 MiB/s
[2024-12-10T04:35:34.277Z] 12502.80 IOPS, 48.84 MiB/s
[2024-12-10T04:35:35.214Z] 12569.50 IOPS, 49.10 MiB/s
[2024-12-10T04:35:36.150Z] 12575.43 IOPS, 49.12 MiB/s
[2024-12-10T04:35:37.086Z] 12642.62 IOPS, 49.39 MiB/s
[2024-12-10T04:35:38.464Z] 12658.56 IOPS, 49.45 MiB/s
[2024-12-10T04:35:38.464Z] 12677.70 IOPS, 49.52 MiB/s
00:10:20.505 Latency(us)
00:10:20.505 [2024-12-10T04:35:38.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.505 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:20.505 Verification LBA range: start 0x0 length 0x4000
00:10:20.505 NVMe0n1 : 10.06 12704.91 49.63 0.00 0.00 80347.45 18849.40 52179.14
00:10:20.505 [2024-12-10T04:35:38.464Z] ===================================================================================================================
00:10:20.505 [2024-12-10T04:35:38.464Z] Total : 12704.91 49.63 0.00 0.00 80347.45 18849.40 52179.14
00:10:20.505 {
00:10:20.505 "results": [
00:10:20.505 {
00:10:20.505 "job": "NVMe0n1",
00:10:20.505 "core_mask": "0x1",
00:10:20.505 "workload": "verify",
00:10:20.505 "status": "finished",
00:10:20.505 "verify_range": {
00:10:20.505 "start": 0,
00:10:20.505 "length": 16384
00:10:20.505 },
00:10:20.505 "queue_depth": 1024,
00:10:20.505 "io_size": 4096,
00:10:20.505 "runtime": 10.058633,
00:10:20.505 "iops": 12704.907316928653,
00:10:20.505 "mibps": 49.62854420675255,
00:10:20.505 "io_failed": 0,
00:10:20.505 "io_timeout": 0,
00:10:20.505 "avg_latency_us": 80347.44603954132,
00:10:20.505 "min_latency_us": 18849.401904761904,
00:10:20.505 "max_latency_us": 52179.13904761905
00:10:20.505 }
00:10:20.505 ],
00:10:20.505 "core_count": 1
00:10:20.505 }
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4187303
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4187303 ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4187303
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4187303
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187303'
killing process with pid 4187303
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4187303
00:10:20.505 Received shutdown signal, test time was about 10.000000 seconds
00:10:20.505
00:10:20.505 Latency(us)
00:10:20.505 [2024-12-10T04:35:38.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.505 [2024-12-10T04:35:38.464Z] ===================================================================================================================
00:10:20.505 [2024-12-10T04:35:38.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4187303
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4187194 ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4187194
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4187194 ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4187194
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:20.505 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4187194
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187194'
killing process with pid 4187194
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4187194
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4187194
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:20.790 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:22.857
00:10:22.857 real 0m21.308s
00:10:22.857 user 0m24.177s
00:10:22.857 sys 0m6.697s
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:22.857 ************************************
00:10:22.857 END TEST nvmf_queue_depth
00:10:22.857 ************************************
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:22.857 ************************************
00:10:22.857 START TEST nvmf_target_multipath
00:10:22.857 ************************************
00:10:22.857 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:10:23.117 * Looking for test storage...
00:10:23.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:23.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.117 --rc genhtml_branch_coverage=1
00:10:23.117 --rc genhtml_function_coverage=1
00:10:23.117 --rc genhtml_legend=1
00:10:23.117 --rc geninfo_all_blocks=1
00:10:23.117 --rc geninfo_unexecuted_blocks=1
00:10:23.117
00:10:23.117 '
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:23.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.117 --rc genhtml_branch_coverage=1
00:10:23.117 --rc genhtml_function_coverage=1
00:10:23.117 --rc genhtml_legend=1
00:10:23.117 --rc geninfo_all_blocks=1
00:10:23.117 --rc geninfo_unexecuted_blocks=1
00:10:23.117
00:10:23.117 '
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:23.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.117 --rc genhtml_branch_coverage=1
00:10:23.117 --rc genhtml_function_coverage=1
00:10:23.117 --rc genhtml_legend=1
00:10:23.117 --rc geninfo_all_blocks=1
00:10:23.117 --rc geninfo_unexecuted_blocks=1
00:10:23.117
00:10:23.117 '
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:23.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.117 --rc genhtml_branch_coverage=1
00:10:23.117 --rc genhtml_function_coverage=1
00:10:23.117 --rc genhtml_legend=1
00:10:23.117 --rc geninfo_all_blocks=1
00:10:23.117 --rc geninfo_unexecuted_blocks=1
00:10:23.117
00:10:23.117 '
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:23.117 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:23.118 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:23.118 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:23.118 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:23.118 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:10:23.118 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:29.687 Found net devices under 0000:af:00.0: cvl_0_0 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.687 05:35:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:29.687 Found net devices under 0000:af:00.1: cvl_0_1 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
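The `(( 2 > 1 ))` check just traced gates the interface role assignment: with more than one detected net device, the first becomes the target-side interface and the second the initiator side, with fixed 10.0.0.x addresses. A sketch of that selection, using the names from this log:

```shell
#!/usr/bin/env bash
# Sketch of the interface/IP selection in nvmf_tcp_init: device 0 is
# the target side, device 1 the initiator side. Names mirror this log.
net_devs=(cvl_0_0 cvl_0_1)
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
if (( ${#net_devs[@]} > 1 )); then
  NVMF_TARGET_INTERFACE=${net_devs[0]}
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}
fi
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
```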
00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.687 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.688 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:10:29.947 00:10:29.947 --- 10.0.0.2 ping statistics --- 00:10:29.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.947 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
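The setup steps traced above build the test topology: the target NIC is moved into a network namespace, both sides get 10.0.0.x addresses, an iptables rule opens the NVMe/TCP port 4420, and a ping in each direction verifies connectivity. A dry-run sketch of that sequence — commands are recorded rather than executed, since the real steps need root; names are taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing in the trace above; each
# step is appended to an array instead of being run as root.
cmds=()
run() { cmds+=("$*"); }
NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                               # namespace for the target side
run ip link set cvl_0_0 netns "$NS"                  # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
run ping -c 1 10.0.0.2                               # target reachable from host side
printf '%s\n' "${cmds[@]}"
```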
00:10:29.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:10:29.947 00:10:29.947 --- 10.0.0.1 ping statistics --- 00:10:29.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.947 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:29.947 only one NIC for nvmf test 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:29.947 05:35:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.947 rmmod nvme_tcp 00:10:29.947 rmmod nvme_fabrics 00:10:29.947 rmmod nvme_keyring 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.947 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:32.485 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.486 00:10:32.486 real 0m9.208s 00:10:32.486 user 0m2.095s 00:10:32.486 sys 0m5.142s 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.486 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:32.486 ************************************ 00:10:32.486 END TEST nvmf_target_multipath 00:10:32.486 ************************************ 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.486 ************************************ 00:10:32.486 START TEST nvmf_zcopy 00:10:32.486 ************************************ 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.486 * Looking for test storage... 00:10:32.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
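The `cmp_versions 1.15 '<' 2` call being traced here compares dotted version strings field by field (the zcopy test uses it to pick lcov options). A minimal sketch of that comparison idea — not the `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch: field-wise dotted-version compare; succeeds when the first
# version is strictly lower, matching the "lt 1.15 2" check above.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```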
00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.486 05:35:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.486 --rc genhtml_branch_coverage=1 00:10:32.486 --rc genhtml_function_coverage=1 00:10:32.486 --rc genhtml_legend=1 00:10:32.486 --rc geninfo_all_blocks=1 00:10:32.486 --rc geninfo_unexecuted_blocks=1 00:10:32.486 00:10:32.486 ' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.486 --rc genhtml_branch_coverage=1 00:10:32.486 --rc genhtml_function_coverage=1 00:10:32.486 --rc genhtml_legend=1 00:10:32.486 --rc geninfo_all_blocks=1 00:10:32.486 --rc geninfo_unexecuted_blocks=1 00:10:32.486 00:10:32.486 ' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.486 --rc genhtml_branch_coverage=1 00:10:32.486 --rc genhtml_function_coverage=1 00:10:32.486 --rc genhtml_legend=1 00:10:32.486 --rc geninfo_all_blocks=1 00:10:32.486 --rc geninfo_unexecuted_blocks=1 00:10:32.486 00:10:32.486 ' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.486 --rc genhtml_branch_coverage=1 00:10:32.486 --rc 
genhtml_function_coverage=1 00:10:32.486 --rc genhtml_legend=1 00:10:32.486 --rc geninfo_all_blocks=1 00:10:32.486 --rc geninfo_unexecuted_blocks=1 00:10:32.486 00:10:32.486 ' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.486 05:35:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.486 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.487 05:35:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.487 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.057 05:35:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:39.057 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:39.057 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:39.057 Found net devices under 0000:af:00.0: cvl_0_0 00:10:39.057 05:35:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:39.057 Found net devices under 0000:af:00.1: cvl_0_1 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.057 05:35:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.057 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.058 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.058 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.058 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.058 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.058 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.058 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.058 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.058 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:10:39.317 00:10:39.317 --- 10.0.0.2 ping statistics --- 00:10:39.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.317 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:39.317 00:10:39.317 --- 10.0.0.1 ping statistics --- 00:10:39.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.317 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3542 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3542 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3542 ']' 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.317 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.317 [2024-12-10 05:35:57.122913] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:10:39.317 [2024-12-10 05:35:57.122961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.317 [2024-12-10 05:35:57.194307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.317 [2024-12-10 05:35:57.235123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.317 [2024-12-10 05:35:57.235156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:39.317 [2024-12-10 05:35:57.235163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.317 [2024-12-10 05:35:57.235169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.317 [2024-12-10 05:35:57.235175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.317 [2024-12-10 05:35:57.235691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 [2024-12-10 05:35:57.375729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 [2024-12-10 05:35:57.395910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 malloc0 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.577 { 00:10:39.577 "params": { 00:10:39.577 "name": "Nvme$subsystem", 00:10:39.577 "trtype": "$TEST_TRANSPORT", 00:10:39.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.577 "adrfam": "ipv4", 00:10:39.577 "trsvcid": "$NVMF_PORT", 00:10:39.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.577 "hdgst": ${hdgst:-false}, 00:10:39.577 "ddgst": ${ddgst:-false} 00:10:39.577 }, 00:10:39.577 "method": "bdev_nvme_attach_controller" 00:10:39.577 } 00:10:39.577 EOF 00:10:39.577 )") 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:39.577 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.577 "params": { 00:10:39.577 "name": "Nvme1", 00:10:39.577 "trtype": "tcp", 00:10:39.577 "traddr": "10.0.0.2", 00:10:39.577 "adrfam": "ipv4", 00:10:39.577 "trsvcid": "4420", 00:10:39.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.577 "hdgst": false, 00:10:39.577 "ddgst": false 00:10:39.577 }, 00:10:39.577 "method": "bdev_nvme_attach_controller" 00:10:39.577 }' 00:10:39.577 [2024-12-10 05:35:57.479708] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:10:39.577 [2024-12-10 05:35:57.479748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3564 ] 00:10:39.836 [2024-12-10 05:35:57.558251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.836 [2024-12-10 05:35:57.597605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.836 Running I/O for 10 seconds... 
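The trace above shows `gen_nvmf_target_json` rendering the bdevperf `--json` config from a heredoc template (per-subsystem placeholders substituted, then joined with `jq`). A minimal Python sketch of that substitution, using the values visible in the rendered JSON in the log; the helper name `gen_target_json` is illustrative, not part of SPDK:

```python
import json

def gen_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2",
                    trsvcid="4420", hdgst=False, ddgst=False):
    # Mirrors the heredoc in nvmf/common.sh: one bdev_nvme_attach_controller
    # entry per subsystem, with NQNs derived from the subsystem index.
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }

cfg = gen_target_json()
print(json.dumps(cfg, indent=2))
```

This reproduces the `printf '%s\n'` output seen in the trace for Nvme1 (trtype `tcp`, traddr `10.0.0.2`, trsvcid `4420`, digests off).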
00:10:42.150 8734.00 IOPS, 68.23 MiB/s [2024-12-10T04:36:01.045Z] 8796.50 IOPS, 68.72 MiB/s [2024-12-10T04:36:01.979Z] 8755.67 IOPS, 68.40 MiB/s [2024-12-10T04:36:02.915Z] 8767.50 IOPS, 68.50 MiB/s [2024-12-10T04:36:03.850Z] 8792.80 IOPS, 68.69 MiB/s [2024-12-10T04:36:04.786Z] 8807.83 IOPS, 68.81 MiB/s [2024-12-10T04:36:06.161Z] 8817.71 IOPS, 68.89 MiB/s [2024-12-10T04:36:07.096Z] 8819.12 IOPS, 68.90 MiB/s [2024-12-10T04:36:08.031Z] 8825.22 IOPS, 68.95 MiB/s [2024-12-10T04:36:08.031Z] 8830.40 IOPS, 68.99 MiB/s 00:10:50.072 Latency(us) 00:10:50.072 [2024-12-10T04:36:08.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.072 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:50.072 Verification LBA range: start 0x0 length 0x1000 00:10:50.072 Nvme1n1 : 10.01 8831.59 69.00 0.00 0.00 14451.42 538.33 23218.47 00:10:50.072 [2024-12-10T04:36:08.031Z] =================================================================================================================== 00:10:50.072 [2024-12-10T04:36:08.031Z] Total : 8831.59 69.00 0.00 0.00 14451.42 538.33 23218.47 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=5919 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.072 05:36:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.072 { 00:10:50.072 "params": { 00:10:50.072 "name": "Nvme$subsystem", 00:10:50.072 "trtype": "$TEST_TRANSPORT", 00:10:50.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.072 "adrfam": "ipv4", 00:10:50.072 "trsvcid": "$NVMF_PORT", 00:10:50.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.072 "hdgst": ${hdgst:-false}, 00:10:50.072 "ddgst": ${ddgst:-false} 00:10:50.072 }, 00:10:50.072 "method": "bdev_nvme_attach_controller" 00:10:50.072 } 00:10:50.072 EOF 00:10:50.072 )") 00:10:50.072 [2024-12-10 05:36:07.947986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:07.948017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:50.072 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.072 "params": { 00:10:50.072 "name": "Nvme1", 00:10:50.072 "trtype": "tcp", 00:10:50.072 "traddr": "10.0.0.2", 00:10:50.072 "adrfam": "ipv4", 00:10:50.072 "trsvcid": "4420", 00:10:50.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.072 "hdgst": false, 00:10:50.072 "ddgst": false 00:10:50.072 }, 00:10:50.072 "method": "bdev_nvme_attach_controller" 00:10:50.072 }' 00:10:50.072 [2024-12-10 05:36:07.959987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:07.960002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.072 [2024-12-10 05:36:07.972019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:07.972032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.072 [2024-12-10 05:36:07.984051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:07.984062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.072 [2024-12-10 05:36:07.987882] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:10:50.072 [2024-12-10 05:36:07.987924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5919 ] 00:10:50.072 [2024-12-10 05:36:07.996080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:07.996091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.072 [2024-12-10 05:36:08.008114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:08.008124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.072 [2024-12-10 05:36:08.020144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.072 [2024-12-10 05:36:08.020155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.330 [2024-12-10 05:36:08.032177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.330 [2024-12-10 05:36:08.032188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.330 [2024-12-10 05:36:08.044209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.330 [2024-12-10 05:36:08.044226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.330 [2024-12-10 05:36:08.056242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.330 [2024-12-10 05:36:08.056268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.330 [2024-12-10 05:36:08.065333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.331 [2024-12-10 05:36:08.068293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:50.331 [2024-12-10 05:36:08.068304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.080321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.080336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.092350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.092362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.104383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.104396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.105364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.331 [2024-12-10 05:36:08.116426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.116443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.128453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.128472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.140481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.140494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.152511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.152523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.164556] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.164578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.176574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.176585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.188603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.188613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.200648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.200668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.212678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.212694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.224713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.224729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.236743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.236758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.248773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.248785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 [2024-12-10 05:36:08.260806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:50.331 [2024-12-10 05:36:08.260820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.331 Running I/O for 5 seconds... 00:10:50.590 16982.00 IOPS, 132.67 MiB/s [2024-12-10T04:36:09.586Z] 17016.50 IOPS, 132.94 MiB/s [2024-12-10T04:36:10.623Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.480466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.480486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.494355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.494374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.508766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.508785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.524609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.524629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.538951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.538971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.549754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.549775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.563850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.563870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.577253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.577273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:52.664 [2024-12-10 05:36:10.591184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.591205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.664 [2024-12-10 05:36:10.604732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.664 [2024-12-10 05:36:10.604753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.618366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.618386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.632074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.632094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.645482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.645502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.659000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.659020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.672771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.672790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.686421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.686451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.699888] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.699907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.713495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.713515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.726984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.727003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.740574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.740593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.754075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.754094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.767766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.767786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.781273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.781292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.794904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.794923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.808484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.808508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.822352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.822371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.835947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 05:36:10.835966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 05:36:10.849852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.924 [2024-12-10 05:36:10.849870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.924 [2024-12-10 05:36:10.863597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.924 [2024-12-10 05:36:10.863616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.877453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.877473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.891192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.891211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.904778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.904797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.918282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 
[2024-12-10 05:36:10.918301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.932343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.932364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.946180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.946201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.959741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.959760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.973215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.973240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:10.986666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:10.986685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.000519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:11.000538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.014472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:11.014491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.028025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:11.028045] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.041602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:11.041621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.055299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:11.055317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.068680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 05:36:11.068703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 05:36:11.082160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.183 [2024-12-10 05:36:11.082180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.183 [2024-12-10 05:36:11.095530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.183 [2024-12-10 05:36:11.095549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.183 [2024-12-10 05:36:11.108665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.183 [2024-12-10 05:36:11.108684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.183 [2024-12-10 05:36:11.122101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.183 [2024-12-10 05:36:11.122121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.183 [2024-12-10 05:36:11.135803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.183 [2024-12-10 05:36:11.135822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:53.441 [2024-12-10 05:36:11.149746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.149765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.163713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.163732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.177328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.177347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.190521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.190541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.204368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.204387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.218080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.218099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.232209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.232235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.245601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.245620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.259820] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.259839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.273473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.273492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.287065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.287085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.300722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.300741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.314814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.314834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.328434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.328457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.342153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.342172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.355752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.355771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.369244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.369263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 05:36:11.382587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 05:36:11.382606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.396326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.396345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 17039.00 IOPS, 133.12 MiB/s [2024-12-10T04:36:11.659Z] [2024-12-10 05:36:11.409744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.409762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.423184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.423203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.436557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.436576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.450034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.450053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.463555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.463574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.477057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.477076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.490853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.490872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.504498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.504518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.517966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.517985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.531464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.531483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.545298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.545318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.558851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.558871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.572477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.572496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.586076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 
[2024-12-10 05:36:11.586100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.599672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.599692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.613589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.613608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.627355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.627374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.700 [2024-12-10 05:36:11.641417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.700 [2024-12-10 05:36:11.641437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.655126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.655145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.668802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.668821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.682381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.682400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.695766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.695785] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.709354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.709373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.722756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.722775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.736863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.736883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.750427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.750447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.764430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.764450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.778448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.778468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.792400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.792420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.806215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.806240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:53.959 [2024-12-10 05:36:11.819858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.819877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.833407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.833426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.847306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.847325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.860840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.860859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.874515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.874535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.888644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.888663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.959 [2024-12-10 05:36:11.902433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.959 [2024-12-10 05:36:11.902452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.916102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.916122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.929846] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.929865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.943414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.943434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.956676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.956695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.969991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.970011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.983441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.983462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:11.997201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:11.997229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.011231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.011252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.024520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.024540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.038484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.038504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.052347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.052366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.065746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.065765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.079657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.079677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.093331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.093350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.107089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.107109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.121066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.121085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.134452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.218 [2024-12-10 05:36:12.134470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.218 [2024-12-10 05:36:12.147813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.219 
[2024-12-10 05:36:12.147832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.219 [2024-12-10 05:36:12.161591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.219 [2024-12-10 05:36:12.161610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.174933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.174953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.188977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.188996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.202345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.202365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.216041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.216060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.229903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.229923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.243779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.243798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.257273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.257292] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.271196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.271222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.284545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.284564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.297667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.297686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.311338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.311357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.324880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.324899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.338602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.338620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.352138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.352161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.365728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.365746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:54.478 [2024-12-10 05:36:12.379421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.379440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.392757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.392776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 17089.00 IOPS, 133.51 MiB/s [2024-12-10T04:36:12.437Z] [2024-12-10 05:36:12.406416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.406435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.478 [2024-12-10 05:36:12.420170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.478 [2024-12-10 05:36:12.420189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.433932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.433952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.447715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.447734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.461225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.461243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.474594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.474611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:54.738 [2024-12-10 05:36:12.488233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.488251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.501990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.502009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.515749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.515768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.529296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.529316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.542957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.542977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.556410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.556429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.570368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.570387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.583959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.583978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.597414] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.597433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.611098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.611121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.625007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.625026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.638958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.638977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.738 [2024-12-10 05:36:12.653009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.738 [2024-12-10 05:36:12.653028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.739 [2024-12-10 05:36:12.666671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.739 [2024-12-10 05:36:12.666690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.739 [2024-12-10 05:36:12.680404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.739 [2024-12-10 05:36:12.680422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.694473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.694496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.707543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.707562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.721165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.721184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.735226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.735246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.748848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.748867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.762066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.762085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.775801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-12-10 05:36:12.775820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 [2024-12-10 05:36:12.789108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.789126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.802801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.802820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.816599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 
[2024-12-10 05:36:12.816618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.830496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.830515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.844177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.844196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.857883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.857903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.871778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.871804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.885345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.885364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.899049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.899069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.912816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.912835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.926700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.926719] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.940337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.940356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.002 [2024-12-10 05:36:12.954171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.002 [2024-12-10 05:36:12.954190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:12.967878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:12.967896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:12.981657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:12.981677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:12.995161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:12.995181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.008911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.008930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.022756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.022776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.036534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.036554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:55.261 [2024-12-10 05:36:13.050181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.050201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.063980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.063999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.077784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.077802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.091892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.091911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.105788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.105809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.119680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.119699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.133869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.133895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.261 [2024-12-10 05:36:13.147634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.261 [2024-12-10 05:36:13.147655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.262 [2024-12-10 05:36:13.160906] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.262 [2024-12-10 05:36:13.160925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.262 [2024-12-10 05:36:13.174547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.262 [2024-12-10 05:36:13.174568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.262 [2024-12-10 05:36:13.188754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.262 [2024-12-10 05:36:13.188773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.262 [2024-12-10 05:36:13.199782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.262 [2024-12-10 05:36:13.199802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.262 [2024-12-10 05:36:13.214587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.262 [2024-12-10 05:36:13.214607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.520 [2024-12-10 05:36:13.228716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.520 [2024-12-10 05:36:13.228737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.520 [2024-12-10 05:36:13.239156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.520 [2024-12-10 05:36:13.239176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.520 [2024-12-10 05:36:13.252972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.520 [2024-12-10 05:36:13.252992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.520 [2024-12-10 05:36:13.266600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.266620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.280477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.280496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.293914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.293933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.307771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.307791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.321561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.321581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.335451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.335470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.349163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.349184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.363332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.363351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.374103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 
[2024-12-10 05:36:13.374122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.388690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.388709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.402157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.402176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 17089.60 IOPS, 133.51 MiB/s 00:10:55.521 Latency(us) 00:10:55.521 [2024-12-10T04:36:13.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.521 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:55.521 Nvme1n1 : 5.01 17097.19 133.57 0.00 0.00 7480.56 2886.70 18724.57 00:10:55.521 [2024-12-10T04:36:13.480Z] =================================================================================================================== 00:10:55.521 [2024-12-10T04:36:13.480Z] Total : 17097.19 133.57 0.00 0.00 7480.56 2886.70 18724.57 00:10:55.521 [2024-12-10 05:36:13.412111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.412129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.424158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.424173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.436202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.436224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.448236] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.448254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.460266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.460281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.521 [2024-12-10 05:36:13.472299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.521 [2024-12-10 05:36:13.472315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.484328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.484342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.496356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.496370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.508387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.508401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.520420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.520443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.532465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.532475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.544487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.544500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.556513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.556524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 [2024-12-10 05:36:13.568547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.780 [2024-12-10 05:36:13.568557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (5919) - No such process 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 5919 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:55.780 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.781 delay0 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.781 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:56.039 [2024-12-10 05:36:13.768350] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:04.156 Initializing NVMe Controllers 00:11:04.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:04.156 Initialization complete. Launching workers. 
00:11:04.156 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5801 00:11:04.156 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6084, failed to submit 37 00:11:04.156 success 5901, unsuccessful 183, failed 0 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.156 rmmod nvme_tcp 00:11:04.156 rmmod nvme_fabrics 00:11:04.156 rmmod nvme_keyring 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3542 ']' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3542 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3542 ']' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3542 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3542' 00:11:04.156 killing process with pid 3542 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3542 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3542 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.156 05:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.095 00:11:05.095 real 0m32.856s 00:11:05.095 user 0m42.978s 00:11:05.095 sys 0m12.242s 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.095 ************************************ 00:11:05.095 END TEST nvmf_zcopy 00:11:05.095 ************************************ 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.095 05:36:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.095 ************************************ 00:11:05.095 START TEST nvmf_nmic 00:11:05.095 ************************************ 00:11:05.095 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:05.355 * Looking for test storage... 
00:11:05.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.355 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:05.355 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:05.355 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.356 05:36:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.356 --rc genhtml_branch_coverage=1 00:11:05.356 --rc genhtml_function_coverage=1 00:11:05.356 --rc genhtml_legend=1 00:11:05.356 --rc geninfo_all_blocks=1 00:11:05.356 --rc geninfo_unexecuted_blocks=1 
00:11:05.356 00:11:05.356 ' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.356 --rc genhtml_branch_coverage=1 00:11:05.356 --rc genhtml_function_coverage=1 00:11:05.356 --rc genhtml_legend=1 00:11:05.356 --rc geninfo_all_blocks=1 00:11:05.356 --rc geninfo_unexecuted_blocks=1 00:11:05.356 00:11:05.356 ' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.356 --rc genhtml_branch_coverage=1 00:11:05.356 --rc genhtml_function_coverage=1 00:11:05.356 --rc genhtml_legend=1 00:11:05.356 --rc geninfo_all_blocks=1 00:11:05.356 --rc geninfo_unexecuted_blocks=1 00:11:05.356 00:11:05.356 ' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.356 --rc genhtml_branch_coverage=1 00:11:05.356 --rc genhtml_function_coverage=1 00:11:05.356 --rc genhtml_legend=1 00:11:05.356 --rc geninfo_all_blocks=1 00:11:05.356 --rc geninfo_unexecuted_blocks=1 00:11:05.356 00:11:05.356 ' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.356 05:36:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.356 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.357 
05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.357 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.938 05:36:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:11.938 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.938 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:11.939 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:11.939 Found net devices under 0000:af:00.0: cvl_0_0 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:11.939 Found net devices under 0000:af:00.1: cvl_0_1 00:11:11.939 
05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.939 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.198 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.198 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.198 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:12.198 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:12.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:11:12.198 00:11:12.198 --- 10.0.0.2 ping statistics --- 00:11:12.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.199 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:11:12.199 00:11:12.199 --- 10.0.0.1 ping statistics --- 00:11:12.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.199 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=11874 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 11874 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 11874 ']' 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.199 05:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.199 [2024-12-10 05:36:30.040123] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:11:12.199 [2024-12-10 05:36:30.040172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.199 [2024-12-10 05:36:30.126930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.458 [2024-12-10 05:36:30.170295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.458 [2024-12-10 05:36:30.170329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.458 [2024-12-10 05:36:30.170336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.458 [2024-12-10 05:36:30.170342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:12.458 [2024-12-10 05:36:30.170348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.458 [2024-12-10 05:36:30.171886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.458 [2024-12-10 05:36:30.171927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.458 [2024-12-10 05:36:30.171953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.458 [2024-12-10 05:36:30.171953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.025 [2024-12-10 05:36:30.923573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.025 05:36:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.025 Malloc0 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.025 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.284 [2024-12-10 05:36:30.993441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.284 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:13.284 test case1: single bdev can't be used in multiple subsystems 00:11:13.285 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:13.285 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.285 05:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.285 [2024-12-10 05:36:31.021343] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:13.285 [2024-12-10 05:36:31.021364] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:13.285 [2024-12-10 05:36:31.021371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:13.285 request: 00:11:13.285 { 00:11:13.285 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:13.285 "namespace": { 00:11:13.285 "bdev_name": "Malloc0", 00:11:13.285 "no_auto_visible": false, 00:11:13.285 "hide_metadata": false 00:11:13.285 }, 00:11:13.285 "method": "nvmf_subsystem_add_ns", 00:11:13.285 "req_id": 1 00:11:13.285 } 00:11:13.285 Got JSON-RPC error response 00:11:13.285 response: 00:11:13.285 { 00:11:13.285 "code": -32602, 00:11:13.285 "message": "Invalid parameters" 00:11:13.285 } 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:13.285 Adding namespace failed - expected result. 
00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:13.285 test case2: host connect to nvmf target in multiple paths 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.285 [2024-12-10 05:36:31.033476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.285 05:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.224 05:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:15.600 05:36:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.600 05:36:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:15.600 05:36:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.600 05:36:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:15.600 05:36:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:17.502 05:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:17.503 [global] 00:11:17.503 thread=1 00:11:17.503 invalidate=1 00:11:17.503 rw=write 00:11:17.503 time_based=1 00:11:17.503 runtime=1 00:11:17.503 ioengine=libaio 00:11:17.503 direct=1 00:11:17.503 bs=4096 00:11:17.503 iodepth=1 00:11:17.503 norandommap=0 00:11:17.503 numjobs=1 00:11:17.503 00:11:17.503 verify_dump=1 00:11:17.503 verify_backlog=512 00:11:17.503 verify_state_save=0 00:11:17.503 do_verify=1 00:11:17.503 verify=crc32c-intel 00:11:17.503 [job0] 00:11:17.503 filename=/dev/nvme0n1 00:11:17.503 Could not set queue depth (nvme0n1) 00:11:17.765 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.765 fio-3.35 00:11:17.765 Starting 1 thread 00:11:19.149 00:11:19.149 job0: (groupid=0, jobs=1): err= 0: pid=12994: Tue Dec 10 05:36:36 2024 00:11:19.149 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:11:19.149 slat (nsec): min=10391, max=23830, avg=22861.00, stdev=2798.36 00:11:19.149 clat (usec): min=40557, max=41125, avg=40950.66, stdev=118.87 00:11:19.150 lat (usec): min=40568, max=41148, 
avg=40973.52, stdev=120.87 00:11:19.150 clat percentiles (usec): 00:11:19.150 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:11:19.150 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:19.150 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:19.150 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:19.150 | 99.99th=[41157] 00:11:19.150 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:11:19.150 slat (usec): min=10, max=20990, avg=52.90, stdev=927.12 00:11:19.150 clat (usec): min=115, max=293, avg=147.74, stdev=18.89 00:11:19.150 lat (usec): min=127, max=21257, avg=200.64, stdev=932.62 00:11:19.150 clat percentiles (usec): 00:11:19.150 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 126], 00:11:19.150 | 30.00th=[ 133], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:11:19.150 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 169], 00:11:19.150 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 293], 99.95th=[ 293], 00:11:19.150 | 99.99th=[ 293] 00:11:19.150 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.150 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.150 lat (usec) : 250=95.51%, 500=0.37% 00:11:19.150 lat (msec) : 50=4.12% 00:11:19.150 cpu : usr=0.69%, sys=0.59%, ctx=538, majf=0, minf=1 00:11:19.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.150 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.150 00:11:19.150 Run status group 0 (all jobs): 00:11:19.150 READ: bw=87.1KiB/s (89.2kB/s), 87.1KiB/s-87.1KiB/s (89.2kB/s-89.2kB/s), io=88.0KiB (90.1kB), 
run=1010-1010msec 00:11:19.150 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:11:19.150 00:11:19.150 Disk stats (read/write): 00:11:19.150 nvme0n1: ios=45/512, merge=0/0, ticks=1764/69, in_queue=1833, util=98.30% 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:11:19.150 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.150 rmmod nvme_tcp 00:11:19.150 rmmod nvme_fabrics 00:11:19.150 rmmod nvme_keyring 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 11874 ']' 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 11874 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 11874 ']' 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 11874 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.150 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 11874 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 11874' 00:11:19.414 killing process with pid 11874 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 11874 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 11874 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.414 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.023 00:11:22.023 real 0m16.376s 00:11:22.023 user 0m35.957s 00:11:22.023 sys 0m5.870s 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.023 ************************************ 00:11:22.023 END TEST nvmf_nmic 00:11:22.023 ************************************ 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.023 ************************************ 00:11:22.023 START TEST nvmf_fio_target 00:11:22.023 ************************************ 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:22.023 * Looking for test storage... 00:11:22.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:22.023 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.024 05:36:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.024 --rc genhtml_branch_coverage=1 00:11:22.024 --rc genhtml_function_coverage=1 00:11:22.024 --rc genhtml_legend=1 00:11:22.024 --rc geninfo_all_blocks=1 00:11:22.024 --rc geninfo_unexecuted_blocks=1 00:11:22.024 00:11:22.024 ' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.024 --rc genhtml_branch_coverage=1 00:11:22.024 --rc genhtml_function_coverage=1 00:11:22.024 --rc genhtml_legend=1 00:11:22.024 --rc geninfo_all_blocks=1 00:11:22.024 --rc geninfo_unexecuted_blocks=1 00:11:22.024 00:11:22.024 ' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.024 --rc genhtml_branch_coverage=1 00:11:22.024 --rc genhtml_function_coverage=1 00:11:22.024 --rc genhtml_legend=1 00:11:22.024 --rc geninfo_all_blocks=1 00:11:22.024 --rc geninfo_unexecuted_blocks=1 00:11:22.024 00:11:22.024 ' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.024 --rc 
genhtml_branch_coverage=1 00:11:22.024 --rc genhtml_function_coverage=1 00:11:22.024 --rc genhtml_legend=1 00:11:22.024 --rc geninfo_all_blocks=1 00:11:22.024 --rc geninfo_unexecuted_blocks=1 00:11:22.024 00:11:22.024 ' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.024 05:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.599 05:36:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:28.599 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:28.599 05:36:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:28.599 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:28.599 Found net devices under 0000:af:00.0: cvl_0_0 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:28.599 Found net devices under 0000:af:00.1: cvl_0_1 
00:11:28.599 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:11:28.600 00:11:28.600 --- 10.0.0.2 ping statistics --- 00:11:28.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.600 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:28.600 00:11:28.600 --- 10.0.0.1 ping statistics --- 00:11:28.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.600 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
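The network preparation traced above (nvmf/common.sh lines 250-291) can be summarized as the following sketch. It is a reconstruction from this log, not the script itself: interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addressing are taken from the trace, and the commands require root plus the physical E810 ports present on this test node.

```shell
# Hedged reconstruction of the netns-based NVMe/TCP topology this run sets up.
# Target side lives in a namespace; initiator side stays in the default netns.
ip netns add cvl_0_0_ns_spdk                  # namespace for the target NIC
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP (default netns)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0       # target IP (inside netns)

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity check both directions, as the log does with single pings.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The namespace split is what lets a single two-port host act as both target and initiator over a real wire; `nvmf_tgt` is then launched under `ip netns exec cvl_0_0_ns_spdk`, as the trace shows next.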
00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=17087 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 17087 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 17087 ']' 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.600 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.860 [2024-12-10 05:36:46.579768] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:11:28.860 [2024-12-10 05:36:46.579819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.860 [2024-12-10 05:36:46.667302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.860 [2024-12-10 05:36:46.707974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.860 [2024-12-10 05:36:46.708012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.860 [2024-12-10 05:36:46.708022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.860 [2024-12-10 05:36:46.708028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.860 [2024-12-10 05:36:46.708034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:28.860 [2024-12-10 05:36:46.709618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.860 [2024-12-10 05:36:46.709725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.860 [2024-12-10 05:36:46.709834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.860 [2024-12-10 05:36:46.709835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.797 [2024-12-10 05:36:47.614208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.797 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.056 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:30.056 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.315 05:36:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:30.315 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.574 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:30.574 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.574 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:30.574 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:30.833 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.092 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:31.092 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.352 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:31.352 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.611 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:31.611 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:31.611 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.869 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:31.869 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.128 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:32.128 05:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:32.386 05:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.645 [2024-12-10 05:36:50.353683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.645 05:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:32.645 05:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:32.904 05:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
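The target/fio.sh steps traced above can be condensed into the RPC sequence below. This is a sketch assembled from the log's own commands: the bdev names, NQN, serial, and listener address come from the trace, `rpc.py` stands in for the full scripts/rpc.py path (run against the nvmf_tgt inside the namespace), and the per-bdev loop is collapsed for brevity.

```shell
# Hedged sketch of the target bring-up this test performs via SPDK RPCs.
rpc.py nvmf_create_transport -t tcp -o -u 8192

# Six 64 MiB malloc bdevs with 512 B blocks (Malloc0..Malloc6 in the log).
for i in 0 1 2 3 4 5 6; do rpc.py bdev_malloc_create 64 512; done

# Two RAID bdevs layered on the malloc bdevs.
rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# Subsystem with four namespaces, then a TCP listener on the target IP.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, yielding /dev/nvme0n1..nvme0n4 for the fio jobs.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The four namespaces are why `waitforserial SPDKISFASTANDAWESOME 4` below expects four block devices before the fio-wrapper jobs start.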
00:11:34.282 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:34.282 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.282 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.282 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:34.282 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:34.282 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:36.186 05:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:36.186 [global] 00:11:36.186 thread=1 00:11:36.186 invalidate=1 00:11:36.186 rw=write 00:11:36.186 time_based=1 00:11:36.186 runtime=1 00:11:36.186 ioengine=libaio 00:11:36.186 direct=1 00:11:36.186 bs=4096 00:11:36.186 iodepth=1 00:11:36.186 norandommap=0 00:11:36.186 numjobs=1 00:11:36.186 00:11:36.186 
verify_dump=1 00:11:36.186 verify_backlog=512 00:11:36.186 verify_state_save=0 00:11:36.186 do_verify=1 00:11:36.186 verify=crc32c-intel 00:11:36.186 [job0] 00:11:36.186 filename=/dev/nvme0n1 00:11:36.186 [job1] 00:11:36.186 filename=/dev/nvme0n2 00:11:36.186 [job2] 00:11:36.186 filename=/dev/nvme0n3 00:11:36.186 [job3] 00:11:36.186 filename=/dev/nvme0n4 00:11:36.186 Could not set queue depth (nvme0n1) 00:11:36.186 Could not set queue depth (nvme0n2) 00:11:36.186 Could not set queue depth (nvme0n3) 00:11:36.186 Could not set queue depth (nvme0n4) 00:11:36.445 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.445 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.445 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.445 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.445 fio-3.35 00:11:36.445 Starting 4 threads 00:11:37.823 00:11:37.823 job0: (groupid=0, jobs=1): err= 0: pid=18648: Tue Dec 10 05:36:55 2024 00:11:37.823 read: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec) 00:11:37.823 slat (nsec): min=8902, max=24926, avg=10237.56, stdev=2308.12 00:11:37.823 clat (usec): min=198, max=42053, avg=1154.94, stdev=5840.32 00:11:37.823 lat (usec): min=208, max=42077, avg=1165.18, stdev=5842.11 00:11:37.823 clat percentiles (usec): 00:11:37.823 | 1.00th=[ 219], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 265], 00:11:37.823 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 310], 00:11:37.823 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 478], 00:11:37.823 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:37.823 | 99.99th=[42206] 00:11:37.823 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:37.823 slat (usec): min=9, max=15102, avg=27.26, 
stdev=471.57 00:11:37.823 clat (usec): min=119, max=302, avg=169.32, stdev=26.85 00:11:37.823 lat (usec): min=130, max=15287, avg=196.59, stdev=472.83 00:11:37.823 clat percentiles (usec): 00:11:37.823 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:11:37.823 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:11:37.823 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 208], 95.00th=[ 241], 00:11:37.823 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 302], 00:11:37.823 | 99.99th=[ 302] 00:11:37.823 bw ( KiB/s): min= 4096, max= 4096, per=17.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:37.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:37.823 lat (usec) : 250=62.11%, 500=36.42%, 750=0.65% 00:11:37.823 lat (msec) : 50=0.82% 00:11:37.823 cpu : usr=0.90%, sys=3.10%, ctx=1708, majf=0, minf=2 00:11:37.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.823 issued rwts: total=681,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.823 job1: (groupid=0, jobs=1): err= 0: pid=18649: Tue Dec 10 05:36:55 2024 00:11:37.823 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 00:11:37.823 slat (nsec): min=8878, max=25719, avg=21854.96, stdev=3940.73 00:11:37.823 clat (usec): min=382, max=41981, avg=39332.13, stdev=8301.29 00:11:37.823 lat (usec): min=408, max=42004, avg=39353.98, stdev=8300.51 00:11:37.823 clat percentiles (usec): 00:11:37.823 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:11:37.823 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:37.823 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:37.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:11:37.823 | 99.99th=[42206] 00:11:37.823 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:11:37.823 slat (nsec): min=9031, max=39813, avg=10253.06, stdev=1655.41 00:11:37.823 clat (usec): min=127, max=249, avg=160.30, stdev=16.46 00:11:37.823 lat (usec): min=137, max=289, avg=170.56, stdev=16.85 00:11:37.823 clat percentiles (usec): 00:11:37.823 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:11:37.823 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:11:37.823 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:11:37.823 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 249], 99.95th=[ 249], 00:11:37.823 | 99.99th=[ 249] 00:11:37.823 bw ( KiB/s): min= 4096, max= 4096, per=17.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:37.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:37.823 lat (usec) : 250=95.52%, 500=0.19% 00:11:37.823 lat (msec) : 50=4.29% 00:11:37.823 cpu : usr=0.29%, sys=0.48%, ctx=536, majf=0, minf=1 00:11:37.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.823 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.823 job2: (groupid=0, jobs=1): err= 0: pid=18650: Tue Dec 10 05:36:55 2024 00:11:37.823 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:37.823 slat (nsec): min=7909, max=43209, avg=9330.11, stdev=1763.61 00:11:37.823 clat (usec): min=200, max=522, avg=282.23, stdev=54.02 00:11:37.823 lat (usec): min=209, max=531, avg=291.56, stdev=54.15 00:11:37.823 clat percentiles (usec): 00:11:37.823 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:11:37.823 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 
60.00th=[ 277], 00:11:37.823 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 351], 95.00th=[ 408], 00:11:37.823 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 515], 99.95th=[ 519], 00:11:37.823 | 99.99th=[ 523] 00:11:37.823 write: IOPS=2083, BW=8336KiB/s (8536kB/s)(8344KiB/1001msec); 0 zone resets 00:11:37.823 slat (nsec): min=11047, max=41114, avg=13637.44, stdev=3228.31 00:11:37.823 clat (usec): min=125, max=339, avg=172.47, stdev=22.22 00:11:37.823 lat (usec): min=138, max=375, avg=186.11, stdev=23.30 00:11:37.823 clat percentiles (usec): 00:11:37.823 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:11:37.823 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:11:37.823 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 219], 00:11:37.823 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 269], 99.95th=[ 269], 00:11:37.823 | 99.99th=[ 338] 00:11:37.823 bw ( KiB/s): min= 8272, max= 8272, per=34.56%, avg=8272.00, stdev= 0.00, samples=1 00:11:37.823 iops : min= 2068, max= 2068, avg=2068.00, stdev= 0.00, samples=1 00:11:37.823 lat (usec) : 250=64.68%, 500=35.05%, 750=0.27% 00:11:37.823 cpu : usr=3.50%, sys=7.20%, ctx=4136, majf=0, minf=1 00:11:37.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.823 issued rwts: total=2048,2086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.823 job3: (groupid=0, jobs=1): err= 0: pid=18651: Tue Dec 10 05:36:55 2024 00:11:37.823 read: IOPS=2309, BW=9239KiB/s (9460kB/s)(9248KiB/1001msec) 00:11:37.823 slat (nsec): min=7101, max=24671, avg=8236.19, stdev=1129.30 00:11:37.823 clat (usec): min=176, max=1010, avg=230.14, stdev=36.19 00:11:37.824 lat (usec): min=184, max=1018, avg=238.37, stdev=36.27 00:11:37.824 clat percentiles (usec): 
00:11:37.824 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:11:37.824 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:11:37.824 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 277], 00:11:37.824 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 578], 99.95th=[ 586], 00:11:37.824 | 99.99th=[ 1012] 00:11:37.824 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:37.824 slat (nsec): min=10064, max=45663, avg=11470.71, stdev=2045.44 00:11:37.824 clat (usec): min=113, max=399, avg=158.36, stdev=17.58 00:11:37.824 lat (usec): min=130, max=410, avg=169.83, stdev=18.00 00:11:37.824 clat percentiles (usec): 00:11:37.824 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:11:37.824 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:11:37.824 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:11:37.824 | 99.00th=[ 204], 99.50th=[ 223], 99.90th=[ 371], 99.95th=[ 379], 00:11:37.824 | 99.99th=[ 400] 00:11:37.824 bw ( KiB/s): min=10832, max=10832, per=45.25%, avg=10832.00, stdev= 0.00, samples=1 00:11:37.824 iops : min= 2708, max= 2708, avg=2708.00, stdev= 0.00, samples=1 00:11:37.824 lat (usec) : 250=92.45%, 500=7.49%, 750=0.04% 00:11:37.824 lat (msec) : 2=0.02% 00:11:37.824 cpu : usr=4.80%, sys=7.00%, ctx=4872, majf=0, minf=2 00:11:37.824 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.824 issued rwts: total=2312,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.824 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.824 00:11:37.824 Run status group 0 (all jobs): 00:11:37.824 READ: bw=19.2MiB/s (20.1MB/s), 92.9KiB/s-9239KiB/s (95.2kB/s-9460kB/s), io=19.8MiB (20.7MB), run=1001-1033msec 00:11:37.824 WRITE: bw=23.4MiB/s (24.5MB/s), 
1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=24.1MiB (25.3MB), run=1001-1033msec 00:11:37.824 00:11:37.824 Disk stats (read/write): 00:11:37.824 nvme0n1: ios=507/512, merge=0/0, ticks=809/85, in_queue=894, util=85.07% 00:11:37.824 nvme0n2: ios=69/512, merge=0/0, ticks=807/81, in_queue=888, util=89.64% 00:11:37.824 nvme0n3: ios=1629/2048, merge=0/0, ticks=1387/316, in_queue=1703, util=92.19% 00:11:37.824 nvme0n4: ios=2056/2048, merge=0/0, ticks=521/308, in_queue=829, util=95.53% 00:11:37.824 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:37.824 [global] 00:11:37.824 thread=1 00:11:37.824 invalidate=1 00:11:37.824 rw=randwrite 00:11:37.824 time_based=1 00:11:37.824 runtime=1 00:11:37.824 ioengine=libaio 00:11:37.824 direct=1 00:11:37.824 bs=4096 00:11:37.824 iodepth=1 00:11:37.824 norandommap=0 00:11:37.824 numjobs=1 00:11:37.824 00:11:37.824 verify_dump=1 00:11:37.824 verify_backlog=512 00:11:37.824 verify_state_save=0 00:11:37.824 do_verify=1 00:11:37.824 verify=crc32c-intel 00:11:37.824 [job0] 00:11:37.824 filename=/dev/nvme0n1 00:11:37.824 [job1] 00:11:37.824 filename=/dev/nvme0n2 00:11:37.824 [job2] 00:11:37.824 filename=/dev/nvme0n3 00:11:37.824 [job3] 00:11:37.824 filename=/dev/nvme0n4 00:11:37.824 Could not set queue depth (nvme0n1) 00:11:37.824 Could not set queue depth (nvme0n2) 00:11:37.824 Could not set queue depth (nvme0n3) 00:11:37.824 Could not set queue depth (nvme0n4) 00:11:38.083 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.083 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.083 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.083 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.083 fio-3.35 00:11:38.083 Starting 4 threads 00:11:39.484 00:11:39.484 job0: (groupid=0, jobs=1): err= 0: pid=19017: Tue Dec 10 05:36:57 2024 00:11:39.484 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:11:39.484 slat (nsec): min=10254, max=34604, avg=19953.82, stdev=8728.97 00:11:39.484 clat (usec): min=40518, max=42058, avg=41126.38, stdev=398.37 00:11:39.484 lat (usec): min=40528, max=42082, avg=41146.33, stdev=399.97 00:11:39.484 clat percentiles (usec): 00:11:39.484 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:39.484 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:39.484 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:39.484 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:39.484 | 99.99th=[42206] 00:11:39.484 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:11:39.484 slat (nsec): min=5945, max=28109, avg=12103.72, stdev=2601.87 00:11:39.484 clat (usec): min=137, max=416, avg=224.42, stdev=34.65 00:11:39.484 lat (usec): min=148, max=426, avg=236.52, stdev=34.79 00:11:39.484 clat percentiles (usec): 00:11:39.484 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 180], 20.00th=[ 202], 00:11:39.484 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:11:39.484 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 285], 00:11:39.484 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 416], 99.95th=[ 416], 00:11:39.484 | 99.99th=[ 416] 00:11:39.484 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:39.484 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:39.484 lat (usec) : 250=79.03%, 500=16.85% 00:11:39.484 lat (msec) : 50=4.12% 00:11:39.485 cpu : usr=0.39%, sys=0.88%, ctx=537, majf=0, minf=1 00:11:39.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.485 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.485 job1: (groupid=0, jobs=1): err= 0: pid=19018: Tue Dec 10 05:36:57 2024 00:11:39.485 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:11:39.485 slat (nsec): min=9940, max=23290, avg=21986.77, stdev=2712.61 00:11:39.485 clat (usec): min=40857, max=42025, avg=41153.96, stdev=385.49 00:11:39.485 lat (usec): min=40879, max=42048, avg=41175.94, stdev=385.42 00:11:39.485 clat percentiles (usec): 00:11:39.485 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:39.485 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:39.485 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:39.485 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:39.485 | 99.99th=[42206] 00:11:39.485 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:11:39.485 slat (nsec): min=8912, max=40748, avg=10540.24, stdev=1819.24 00:11:39.485 clat (usec): min=131, max=342, avg=221.25, stdev=36.21 00:11:39.485 lat (usec): min=140, max=383, avg=231.79, stdev=36.33 00:11:39.485 clat percentiles (usec): 00:11:39.485 | 1.00th=[ 135], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 196], 00:11:39.485 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:11:39.485 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 277], 00:11:39.485 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 343], 99.95th=[ 343], 00:11:39.485 | 99.99th=[ 343] 00:11:39.485 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:39.485 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:39.485 lat (usec) : 250=83.71%, 500=12.17% 00:11:39.485 lat (msec) : 
50=4.12% 00:11:39.485 cpu : usr=0.59%, sys=0.20%, ctx=534, majf=0, minf=2 00:11:39.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.485 job2: (groupid=0, jobs=1): err= 0: pid=19019: Tue Dec 10 05:36:57 2024 00:11:39.485 read: IOPS=23, BW=95.8KiB/s (98.1kB/s)(96.0KiB/1002msec) 00:11:39.485 slat (nsec): min=9187, max=27268, avg=22189.50, stdev=4696.51 00:11:39.485 clat (usec): min=248, max=42081, avg=37828.82, stdev=11569.40 00:11:39.485 lat (usec): min=259, max=42104, avg=37851.01, stdev=11570.31 00:11:39.485 clat percentiles (usec): 00:11:39.485 | 1.00th=[ 249], 5.00th=[ 343], 10.00th=[40633], 20.00th=[41157], 00:11:39.485 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:39.485 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:39.485 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:39.485 | 99.99th=[42206] 00:11:39.485 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:39.485 slat (nsec): min=10832, max=38854, avg=13099.59, stdev=2182.43 00:11:39.485 clat (usec): min=141, max=316, avg=166.17, stdev=14.86 00:11:39.485 lat (usec): min=153, max=328, avg=179.27, stdev=15.24 00:11:39.485 clat percentiles (usec): 00:11:39.485 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:11:39.485 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:11:39.485 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 192], 00:11:39.485 | 99.00th=[ 217], 99.50th=[ 241], 99.90th=[ 318], 99.95th=[ 318], 00:11:39.485 | 99.99th=[ 318] 00:11:39.485 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, 
avg=4096.00, stdev= 0.00, samples=1 00:11:39.485 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:39.485 lat (usec) : 250=95.52%, 500=0.37% 00:11:39.485 lat (msec) : 50=4.10% 00:11:39.485 cpu : usr=0.60%, sys=0.40%, ctx=539, majf=0, minf=1 00:11:39.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.485 job3: (groupid=0, jobs=1): err= 0: pid=19020: Tue Dec 10 05:36:57 2024 00:11:39.485 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:11:39.485 slat (nsec): min=10050, max=26693, avg=24151.41, stdev=3589.14 00:11:39.485 clat (usec): min=40882, max=41957, avg=41025.76, stdev=225.79 00:11:39.485 lat (usec): min=40904, max=41983, avg=41049.91, stdev=225.34 00:11:39.485 clat percentiles (usec): 00:11:39.485 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:39.485 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:39.485 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:39.485 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:39.485 | 99.99th=[42206] 00:11:39.485 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:11:39.485 slat (nsec): min=9994, max=40012, avg=13061.25, stdev=2304.87 00:11:39.485 clat (usec): min=132, max=361, avg=223.70, stdev=40.10 00:11:39.485 lat (usec): min=146, max=401, avg=236.76, stdev=40.70 00:11:39.485 clat percentiles (usec): 00:11:39.485 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 190], 00:11:39.485 | 30.00th=[ 206], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:11:39.485 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 273], 
95.00th=[ 302], 00:11:39.485 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 363], 00:11:39.485 | 99.99th=[ 363] 00:11:39.485 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:39.485 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:39.485 lat (usec) : 250=75.84%, 500=20.04% 00:11:39.485 lat (msec) : 50=4.12% 00:11:39.485 cpu : usr=0.10%, sys=1.27%, ctx=534, majf=0, minf=2 00:11:39.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.485 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.485 00:11:39.485 Run status group 0 (all jobs): 00:11:39.485 READ: bw=350KiB/s (359kB/s), 85.6KiB/s-95.8KiB/s (87.7kB/s-98.1kB/s), io=360KiB (369kB), run=1002-1028msec 00:11:39.485 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2044KiB/s (2040kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1028msec 00:11:39.485 00:11:39.485 Disk stats (read/write): 00:11:39.485 nvme0n1: ios=42/512, merge=0/0, ticks=1645/109, in_queue=1754, util=93.29% 00:11:39.485 nvme0n2: ios=67/512, merge=0/0, ticks=767/110, in_queue=877, util=90.13% 00:11:39.485 nvme0n3: ios=69/512, merge=0/0, ticks=1269/84, in_queue=1353, util=96.75% 00:11:39.485 nvme0n4: ios=17/512, merge=0/0, ticks=697/109, in_queue=806, util=89.63% 00:11:39.485 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:39.485 [global] 00:11:39.485 thread=1 00:11:39.485 invalidate=1 00:11:39.485 rw=write 00:11:39.485 time_based=1 00:11:39.485 runtime=1 00:11:39.485 ioengine=libaio 00:11:39.485 direct=1 00:11:39.485 bs=4096 00:11:39.485 iodepth=128 
00:11:39.485 norandommap=0 00:11:39.485 numjobs=1 00:11:39.485 00:11:39.485 verify_dump=1 00:11:39.485 verify_backlog=512 00:11:39.485 verify_state_save=0 00:11:39.485 do_verify=1 00:11:39.485 verify=crc32c-intel 00:11:39.485 [job0] 00:11:39.485 filename=/dev/nvme0n1 00:11:39.485 [job1] 00:11:39.485 filename=/dev/nvme0n2 00:11:39.485 [job2] 00:11:39.485 filename=/dev/nvme0n3 00:11:39.485 [job3] 00:11:39.485 filename=/dev/nvme0n4 00:11:39.485 Could not set queue depth (nvme0n1) 00:11:39.485 Could not set queue depth (nvme0n2) 00:11:39.485 Could not set queue depth (nvme0n3) 00:11:39.485 Could not set queue depth (nvme0n4) 00:11:39.742 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.742 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.742 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.742 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.742 fio-3.35 00:11:39.742 Starting 4 threads 00:11:41.112 00:11:41.112 job0: (groupid=0, jobs=1): err= 0: pid=19391: Tue Dec 10 05:36:58 2024 00:11:41.112 read: IOPS=3350, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1011msec) 00:11:41.112 slat (nsec): min=1429, max=6291.2k, avg=97629.79, stdev=551878.80 00:11:41.112 clat (usec): min=7562, max=43160, avg=12383.78, stdev=2842.56 00:11:41.112 lat (usec): min=7573, max=43166, avg=12481.41, stdev=2893.90 00:11:41.112 clat percentiles (usec): 00:11:41.112 | 1.00th=[ 8356], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10421], 00:11:41.112 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12256], 60.00th=[12518], 00:11:41.112 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14615], 95.00th=[17171], 00:11:41.112 | 99.00th=[21365], 99.50th=[23725], 99.90th=[43254], 99.95th=[43254], 00:11:41.112 | 99.99th=[43254] 00:11:41.112 write: IOPS=3545, BW=13.8MiB/s 
(14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:11:41.112 slat (usec): min=2, max=19518, avg=180.85, stdev=1017.52 00:11:41.112 clat (usec): min=6274, max=61838, avg=23045.95, stdev=12983.82 00:11:41.112 lat (usec): min=6286, max=61850, avg=23226.79, stdev=13062.77 00:11:41.112 clat percentiles (usec): 00:11:41.112 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10945], 00:11:41.112 | 30.00th=[16712], 40.00th=[17957], 50.00th=[20579], 60.00th=[21103], 00:11:41.112 | 70.00th=[24773], 80.00th=[28967], 90.00th=[47449], 95.00th=[52691], 00:11:41.112 | 99.00th=[60031], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:11:41.112 | 99.99th=[61604] 00:11:41.112 bw ( KiB/s): min=12288, max=16384, per=20.60%, avg=14336.00, stdev=2896.31, samples=2 00:11:41.112 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:41.112 lat (msec) : 10=11.22%, 20=59.66%, 50=25.26%, 100=3.86% 00:11:41.112 cpu : usr=2.97%, sys=4.75%, ctx=376, majf=0, minf=1 00:11:41.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:41.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.112 issued rwts: total=3387,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.112 job1: (groupid=0, jobs=1): err= 0: pid=19392: Tue Dec 10 05:36:58 2024 00:11:41.112 read: IOPS=5618, BW=21.9MiB/s (23.0MB/s)(22.1MiB/1005msec) 00:11:41.112 slat (nsec): min=1131, max=28236k, avg=91754.05, stdev=793761.37 00:11:41.112 clat (usec): min=1745, max=56220, avg=12552.61, stdev=7428.57 00:11:41.112 lat (usec): min=1840, max=56236, avg=12644.37, stdev=7481.18 00:11:41.112 clat percentiles (usec): 00:11:41.112 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 7046], 20.00th=[ 8586], 00:11:41.112 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10552], 00:11:41.113 | 70.00th=[12125], 
80.00th=[15008], 90.00th=[21103], 95.00th=[26346], 00:11:41.113 | 99.00th=[44827], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:11:41.113 | 99.99th=[56361] 00:11:41.113 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:11:41.113 slat (usec): min=2, max=10096, avg=67.44, stdev=438.79 00:11:41.113 clat (usec): min=1141, max=26829, avg=9219.90, stdev=2385.98 00:11:41.113 lat (usec): min=1150, max=26834, avg=9287.33, stdev=2424.74 00:11:41.113 clat percentiles (usec): 00:11:41.113 | 1.00th=[ 3294], 5.00th=[ 5276], 10.00th=[ 6849], 20.00th=[ 7832], 00:11:41.113 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:11:41.113 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[12256], 00:11:41.113 | 99.00th=[20055], 99.50th=[21103], 99.90th=[22938], 99.95th=[23725], 00:11:41.113 | 99.99th=[26870] 00:11:41.113 bw ( KiB/s): min=23952, max=24304, per=34.67%, avg=24128.00, stdev=248.90, samples=2 00:11:41.113 iops : min= 5988, max= 6076, avg=6032.00, stdev=62.23, samples=2 00:11:41.113 lat (msec) : 2=0.07%, 4=0.93%, 10=66.86%, 20=26.19%, 50=5.67% 00:11:41.113 lat (msec) : 100=0.28% 00:11:41.113 cpu : usr=4.88%, sys=7.97%, ctx=441, majf=0, minf=2 00:11:41.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:41.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.113 issued rwts: total=5647,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.113 job2: (groupid=0, jobs=1): err= 0: pid=19393: Tue Dec 10 05:36:58 2024 00:11:41.113 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:11:41.113 slat (nsec): min=1498, max=14760k, avg=119079.19, stdev=829102.54 00:11:41.113 clat (usec): min=3825, max=46853, avg=14050.54, stdev=4967.09 00:11:41.113 lat (usec): min=3832, max=46859, avg=14169.62, 
stdev=5023.27 00:11:41.113 clat percentiles (usec): 00:11:41.113 | 1.00th=[ 5997], 5.00th=[ 7832], 10.00th=[10552], 20.00th=[11338], 00:11:41.113 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:11:41.113 | 70.00th=[14222], 80.00th=[16450], 90.00th=[19792], 95.00th=[23462], 00:11:41.113 | 99.00th=[32900], 99.50th=[41681], 99.90th=[46924], 99.95th=[46924], 00:11:41.113 | 99.99th=[46924] 00:11:41.113 write: IOPS=4235, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1010msec); 0 zone resets 00:11:41.113 slat (usec): min=2, max=30305, avg=113.81, stdev=665.71 00:11:41.113 clat (usec): min=1586, max=46856, avg=15605.70, stdev=9443.27 00:11:41.113 lat (usec): min=1609, max=46868, avg=15719.51, stdev=9509.10 00:11:41.113 clat percentiles (usec): 00:11:41.113 | 1.00th=[ 3228], 5.00th=[ 5735], 10.00th=[ 8291], 20.00th=[10552], 00:11:41.113 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12518], 60.00th=[13173], 00:11:41.113 | 70.00th=[14353], 80.00th=[19530], 90.00th=[31851], 95.00th=[40109], 00:11:41.113 | 99.00th=[44303], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:11:41.113 | 99.99th=[46924] 00:11:41.113 bw ( KiB/s): min=16368, max=16840, per=23.86%, avg=16604.00, stdev=333.75, samples=2 00:11:41.113 iops : min= 4092, max= 4210, avg=4151.00, stdev=83.44, samples=2 00:11:41.113 lat (msec) : 2=0.02%, 4=1.18%, 10=11.81%, 20=73.41%, 50=13.58% 00:11:41.113 cpu : usr=3.17%, sys=5.45%, ctx=525, majf=0, minf=1 00:11:41.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:41.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.113 issued rwts: total=4096,4278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.113 job3: (groupid=0, jobs=1): err= 0: pid=19394: Tue Dec 10 05:36:58 2024 00:11:41.113 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1011msec) 
00:11:41.113 slat (nsec): min=1493, max=12032k, avg=133361.43, stdev=799443.92 00:11:41.113 clat (usec): min=4861, max=44358, avg=14692.28, stdev=6477.89 00:11:41.113 lat (usec): min=4868, max=44367, avg=14825.64, stdev=6541.60 00:11:41.113 clat percentiles (usec): 00:11:41.113 | 1.00th=[ 5669], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10421], 00:11:41.113 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:11:41.113 | 70.00th=[13960], 80.00th=[15795], 90.00th=[23200], 95.00th=[28705], 00:11:41.113 | 99.00th=[40633], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:11:41.113 | 99.99th=[44303] 00:11:41.113 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:11:41.113 slat (usec): min=2, max=10680, avg=155.63, stdev=619.87 00:11:41.113 clat (usec): min=1515, max=46387, avg=22761.51, stdev=11050.54 00:11:41.113 lat (usec): min=1529, max=46391, avg=22917.14, stdev=11124.85 00:11:41.113 clat percentiles (usec): 00:11:41.113 | 1.00th=[ 3458], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[12387], 00:11:41.113 | 30.00th=[16712], 40.00th=[20055], 50.00th=[20841], 60.00th=[23725], 00:11:41.113 | 70.00th=[29492], 80.00th=[33817], 90.00th=[39584], 95.00th=[41681], 00:11:41.113 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:11:41.113 | 99.99th=[46400] 00:11:41.113 bw ( KiB/s): min=13680, max=14784, per=20.45%, avg=14232.00, stdev=780.65, samples=2 00:11:41.113 iops : min= 3420, max= 3696, avg=3558.00, stdev=195.16, samples=2 00:11:41.113 lat (msec) : 2=0.04%, 4=0.80%, 10=14.58%, 20=45.95%, 50=38.64% 00:11:41.113 cpu : usr=2.87%, sys=4.26%, ctx=437, majf=0, minf=1 00:11:41.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:41.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.113 issued rwts: total=3174,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.113 
latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.113 00:11:41.113 Run status group 0 (all jobs): 00:11:41.113 READ: bw=63.0MiB/s (66.1MB/s), 12.3MiB/s-21.9MiB/s (12.9MB/s-23.0MB/s), io=63.7MiB (66.8MB), run=1005-1011msec 00:11:41.113 WRITE: bw=68.0MiB/s (71.3MB/s), 13.8MiB/s-23.9MiB/s (14.5MB/s-25.0MB/s), io=68.7MiB (72.0MB), run=1005-1011msec 00:11:41.113 00:11:41.113 Disk stats (read/write): 00:11:41.113 nvme0n1: ios=3107/3072, merge=0/0, ticks=18491/32703, in_queue=51194, util=93.59% 00:11:41.113 nvme0n2: ios=4638/5120, merge=0/0, ticks=39786/33752, in_queue=73538, util=97.76% 00:11:41.113 nvme0n3: ios=3094/3487, merge=0/0, ticks=44052/57321, in_queue=101373, util=97.70% 00:11:41.113 nvme0n4: ios=2599/3039, merge=0/0, ticks=35459/68646, in_queue=104105, util=96.09% 00:11:41.113 05:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:41.113 [global] 00:11:41.113 thread=1 00:11:41.113 invalidate=1 00:11:41.113 rw=randwrite 00:11:41.113 time_based=1 00:11:41.113 runtime=1 00:11:41.113 ioengine=libaio 00:11:41.113 direct=1 00:11:41.113 bs=4096 00:11:41.113 iodepth=128 00:11:41.113 norandommap=0 00:11:41.113 numjobs=1 00:11:41.113 00:11:41.113 verify_dump=1 00:11:41.113 verify_backlog=512 00:11:41.113 verify_state_save=0 00:11:41.113 do_verify=1 00:11:41.113 verify=crc32c-intel 00:11:41.113 [job0] 00:11:41.113 filename=/dev/nvme0n1 00:11:41.113 [job1] 00:11:41.113 filename=/dev/nvme0n2 00:11:41.113 [job2] 00:11:41.113 filename=/dev/nvme0n3 00:11:41.113 [job3] 00:11:41.113 filename=/dev/nvme0n4 00:11:41.113 Could not set queue depth (nvme0n1) 00:11:41.113 Could not set queue depth (nvme0n2) 00:11:41.113 Could not set queue depth (nvme0n3) 00:11:41.113 Could not set queue depth (nvme0n4) 00:11:41.113 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:11:41.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.113 fio-3.35 00:11:41.113 Starting 4 threads 00:11:42.485 00:11:42.485 job0: (groupid=0, jobs=1): err= 0: pid=19760: Tue Dec 10 05:37:00 2024 00:11:42.485 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:11:42.485 slat (nsec): min=1250, max=15488k, avg=144858.69, stdev=1047585.68 00:11:42.485 clat (usec): min=2382, max=66591, avg=17072.73, stdev=9411.48 00:11:42.485 lat (usec): min=4966, max=66599, avg=17217.59, stdev=9503.25 00:11:42.485 clat percentiles (usec): 00:11:42.485 | 1.00th=[ 5669], 5.00th=[10159], 10.00th=[10683], 20.00th=[10945], 00:11:42.485 | 30.00th=[11207], 40.00th=[11863], 50.00th=[13435], 60.00th=[15533], 00:11:42.485 | 70.00th=[17171], 80.00th=[22152], 90.00th=[29492], 95.00th=[32900], 00:11:42.485 | 99.00th=[54789], 99.50th=[57410], 99.90th=[66323], 99.95th=[66847], 00:11:42.485 | 99.99th=[66847] 00:11:42.485 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:11:42.485 slat (nsec): min=1910, max=12430k, avg=118809.82, stdev=721343.25 00:11:42.485 clat (usec): min=1568, max=71364, avg=18652.36, stdev=13410.51 00:11:42.485 lat (usec): min=1575, max=71371, avg=18771.17, stdev=13486.14 00:11:42.485 clat percentiles (usec): 00:11:42.485 | 1.00th=[ 2868], 5.00th=[ 4621], 10.00th=[ 6783], 20.00th=[ 8586], 00:11:42.485 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[12780], 60.00th=[20055], 00:11:42.485 | 70.00th=[21627], 80.00th=[28967], 90.00th=[39060], 95.00th=[43779], 00:11:42.485 | 99.00th=[62653], 99.50th=[65274], 99.90th=[71828], 99.95th=[71828], 00:11:42.485 | 99.99th=[71828] 00:11:42.485 bw ( KiB/s): min=12288, max=16384, per=19.64%, 
avg=14336.00, stdev=2896.31, samples=2 00:11:42.485 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:42.485 lat (msec) : 2=0.36%, 4=1.26%, 10=17.94%, 20=48.85%, 50=29.07% 00:11:42.485 lat (msec) : 100=2.51% 00:11:42.485 cpu : usr=2.08%, sys=4.27%, ctx=290, majf=0, minf=1 00:11:42.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:42.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.485 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.485 job1: (groupid=0, jobs=1): err= 0: pid=19763: Tue Dec 10 05:37:00 2024 00:11:42.485 read: IOPS=5319, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1010msec) 00:11:42.485 slat (nsec): min=1457, max=15390k, avg=109710.32, stdev=840761.86 00:11:42.485 clat (msec): min=3, max=135, avg=12.76, stdev=13.54 00:11:42.485 lat (msec): min=3, max=135, avg=12.87, stdev=13.66 00:11:42.485 clat percentiles (msec): 00:11:42.485 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:11:42.485 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:11:42.485 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 16], 95.00th=[ 18], 00:11:42.485 | 99.00th=[ 107], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:11:42.485 | 99.99th=[ 136] 00:11:42.485 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:11:42.485 slat (usec): min=2, max=8113, avg=66.80, stdev=463.48 00:11:42.485 clat (usec): min=1413, max=135335, avg=10529.05, stdev=9886.42 00:11:42.485 lat (usec): min=1424, max=135355, avg=10595.85, stdev=9896.56 00:11:42.485 clat percentiles (msec): 00:11:42.485 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 9], 00:11:42.485 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:11:42.485 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 14], 
00:11:42.485 | 99.00th=[ 70], 99.50th=[ 90], 99.90th=[ 124], 99.95th=[ 124], 00:11:42.485 | 99.99th=[ 136] 00:11:42.485 bw ( KiB/s): min=18192, max=26864, per=30.86%, avg=22528.00, stdev=6132.03, samples=2 00:11:42.485 iops : min= 4548, max= 6716, avg=5632.00, stdev=1533.01, samples=2 00:11:42.485 lat (msec) : 2=0.08%, 4=1.66%, 10=64.18%, 20=31.03%, 50=1.32% 00:11:42.485 lat (msec) : 100=0.86%, 250=0.86% 00:11:42.485 cpu : usr=4.56%, sys=6.44%, ctx=477, majf=0, minf=1 00:11:42.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:42.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.485 issued rwts: total=5373,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.485 job2: (groupid=0, jobs=1): err= 0: pid=19771: Tue Dec 10 05:37:00 2024 00:11:42.485 read: IOPS=4098, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec) 00:11:42.485 slat (nsec): min=1443, max=20951k, avg=108016.83, stdev=731320.08 00:11:42.485 clat (usec): min=4173, max=46630, avg=13562.47, stdev=6563.12 00:11:42.485 lat (usec): min=4603, max=46643, avg=13670.49, stdev=6621.25 00:11:42.485 clat percentiles (usec): 00:11:42.485 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[10290], 00:11:42.485 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:11:42.485 | 70.00th=[13698], 80.00th=[14484], 90.00th=[20317], 95.00th=[32113], 00:11:42.485 | 99.00th=[38536], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:11:42.485 | 99.99th=[46400] 00:11:42.485 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:42.485 slat (usec): min=2, max=21809, avg=115.02, stdev=646.87 00:11:42.485 clat (usec): min=4650, max=93305, avg=15499.38, stdev=12276.63 00:11:42.485 lat (usec): min=4662, max=93312, avg=15614.40, stdev=12345.45 00:11:42.485 clat 
percentiles (usec): 00:11:42.485 | 1.00th=[ 6783], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9372], 00:11:42.485 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:11:42.485 | 70.00th=[14222], 80.00th=[20579], 90.00th=[23725], 95.00th=[30802], 00:11:42.485 | 99.00th=[84411], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:11:42.485 | 99.99th=[92799] 00:11:42.485 bw ( KiB/s): min=15592, max=20440, per=24.68%, avg=18016.00, stdev=3428.05, samples=2 00:11:42.485 iops : min= 3898, max= 5110, avg=4504.00, stdev=857.01, samples=2 00:11:42.485 lat (msec) : 10=21.35%, 20=62.69%, 50=14.42%, 100=1.55% 00:11:42.485 cpu : usr=4.48%, sys=3.98%, ctx=533, majf=0, minf=2 00:11:42.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:42.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.485 issued rwts: total=4119,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.485 job3: (groupid=0, jobs=1): err= 0: pid=19774: Tue Dec 10 05:37:00 2024 00:11:42.485 read: IOPS=4181, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1010msec) 00:11:42.486 slat (nsec): min=1464, max=27785k, avg=111387.91, stdev=799469.50 00:11:42.486 clat (usec): min=2270, max=39123, avg=13900.55, stdev=5375.95 00:11:42.486 lat (usec): min=2276, max=60362, avg=14011.94, stdev=5424.73 00:11:42.486 clat percentiles (usec): 00:11:42.486 | 1.00th=[ 3654], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10945], 00:11:42.486 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12518], 60.00th=[13698], 00:11:42.486 | 70.00th=[14746], 80.00th=[15795], 90.00th=[17957], 95.00th=[27657], 00:11:42.486 | 99.00th=[35914], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:11:42.486 | 99.99th=[39060] 00:11:42.486 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:11:42.486 slat (usec): 
min=2, max=15812, avg=108.23, stdev=596.38 00:11:42.486 clat (usec): min=1175, max=39247, avg=15011.52, stdev=5755.39 00:11:42.486 lat (usec): min=1186, max=39250, avg=15119.75, stdev=5798.01 00:11:42.486 clat percentiles (usec): 00:11:42.486 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10945], 00:11:42.486 | 30.00th=[11338], 40.00th=[11600], 50.00th=[13173], 60.00th=[14222], 00:11:42.486 | 70.00th=[16188], 80.00th=[20579], 90.00th=[23725], 95.00th=[26346], 00:11:42.486 | 99.00th=[32637], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:11:42.486 | 99.99th=[39060] 00:11:42.486 bw ( KiB/s): min=16440, max=20416, per=25.24%, avg=18428.00, stdev=2811.46, samples=2 00:11:42.486 iops : min= 4110, max= 5104, avg=4607.00, stdev=702.86, samples=2 00:11:42.486 lat (msec) : 2=0.02%, 4=0.63%, 10=10.41%, 20=74.53%, 50=14.40% 00:11:42.486 cpu : usr=4.46%, sys=4.26%, ctx=512, majf=0, minf=2 00:11:42.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:42.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.486 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.486 00:11:42.486 Run status group 0 (all jobs): 00:11:42.486 READ: bw=66.9MiB/s (70.1MB/s), 13.8MiB/s-20.8MiB/s (14.5MB/s-21.8MB/s), io=67.5MiB (70.8MB), run=1005-1010msec 00:11:42.486 WRITE: bw=71.3MiB/s (74.7MB/s), 13.9MiB/s-21.8MiB/s (14.5MB/s-22.8MB/s), io=72.0MiB (75.5MB), run=1005-1010msec 00:11:42.486 00:11:42.486 Disk stats (read/write): 00:11:42.486 nvme0n1: ios=2714/3072, merge=0/0, ticks=30941/38763, in_queue=69704, util=97.70% 00:11:42.486 nvme0n2: ios=5140/5512, merge=0/0, ticks=53143/48614, in_queue=101757, util=90.22% 00:11:42.486 nvme0n3: ios=3128/3559, merge=0/0, ticks=22060/29651, in_queue=51711, util=89.93% 00:11:42.486 nvme0n4: ios=3624/3999, 
merge=0/0, ticks=24275/32457, in_queue=56732, util=96.93% 00:11:42.486 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:42.486 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=19998 00:11:42.486 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:42.486 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:42.486 [global] 00:11:42.486 thread=1 00:11:42.486 invalidate=1 00:11:42.486 rw=read 00:11:42.486 time_based=1 00:11:42.486 runtime=10 00:11:42.486 ioengine=libaio 00:11:42.486 direct=1 00:11:42.486 bs=4096 00:11:42.486 iodepth=1 00:11:42.486 norandommap=1 00:11:42.486 numjobs=1 00:11:42.486 00:11:42.486 [job0] 00:11:42.486 filename=/dev/nvme0n1 00:11:42.486 [job1] 00:11:42.486 filename=/dev/nvme0n2 00:11:42.486 [job2] 00:11:42.486 filename=/dev/nvme0n3 00:11:42.486 [job3] 00:11:42.486 filename=/dev/nvme0n4 00:11:42.486 Could not set queue depth (nvme0n1) 00:11:42.486 Could not set queue depth (nvme0n2) 00:11:42.486 Could not set queue depth (nvme0n3) 00:11:42.486 Could not set queue depth (nvme0n4) 00:11:42.744 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.744 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.744 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.744 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.744 fio-3.35 00:11:42.744 Starting 4 threads 00:11:46.025 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:46.025 fio: io_u error on file 
/dev/nvme0n4: Operation not supported: read offset=47730688, buflen=4096 00:11:46.025 fio: pid=20288, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.025 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:46.025 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=50495488, buflen=4096 00:11:46.025 fio: pid=20275, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.025 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.025 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:46.025 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.025 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:46.283 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:11:46.283 fio: pid=20219, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.283 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48607232, buflen=4096 00:11:46.283 fio: pid=20244, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.283 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.283 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:11:46.283 00:11:46.283 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=20219: Tue Dec 10 05:37:04 2024 00:11:46.283 read: IOPS=26, BW=103KiB/s (105kB/s)(324KiB/3147msec) 00:11:46.283 slat (usec): min=8, max=15580, avg=512.86, stdev=2553.40 00:11:46.283 clat (usec): min=289, max=42034, avg=38067.90, stdev=10730.37 00:11:46.283 lat (usec): min=314, max=56824, avg=38442.04, stdev=10445.40 00:11:46.283 clat percentiles (usec): 00:11:46.283 | 1.00th=[ 289], 5.00th=[ 445], 10.00th=[40633], 20.00th=[41157], 00:11:46.283 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:46.283 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:46.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:46.283 | 99.99th=[42206] 00:11:46.283 bw ( KiB/s): min= 96, max= 112, per=0.24%, avg=103.83, stdev= 8.59, samples=6 00:11:46.283 iops : min= 24, max= 28, avg=25.83, stdev= 2.04, samples=6 00:11:46.283 lat (usec) : 500=6.10%, 750=1.22% 00:11:46.283 lat (msec) : 50=91.46% 00:11:46.283 cpu : usr=0.00%, sys=0.10%, ctx=86, majf=0, minf=1 00:11:46.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.283 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.283 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.283 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=20244: Tue Dec 10 05:37:04 2024 00:11:46.283 read: IOPS=3571, BW=13.9MiB/s (14.6MB/s)(46.4MiB/3323msec) 00:11:46.283 slat (usec): min=6, max=11136, avg=10.25, stdev=131.45 00:11:46.283 clat (usec): min=156, max=41702, avg=266.09, stdev=915.42 00:11:46.283 lat (usec): min=164, max=46045, 
avg=276.34, stdev=942.58 00:11:46.283 clat percentiles (usec): 00:11:46.283 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 219], 20.00th=[ 233], 00:11:46.283 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:46.283 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:11:46.283 | 99.00th=[ 404], 99.50th=[ 453], 99.90th=[ 603], 99.95th=[40633], 00:11:46.283 | 99.99th=[41157] 00:11:46.284 bw ( KiB/s): min=15176, max=16400, per=35.94%, avg=15542.67, stdev=438.39, samples=6 00:11:46.284 iops : min= 3794, max= 4100, avg=3885.67, stdev=109.60, samples=6 00:11:46.284 lat (usec) : 250=60.06%, 500=39.80%, 750=0.05% 00:11:46.284 lat (msec) : 2=0.02%, 4=0.01%, 50=0.05% 00:11:46.284 cpu : usr=1.84%, sys=5.93%, ctx=11871, majf=0, minf=2 00:11:46.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.284 issued rwts: total=11868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.284 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=20275: Tue Dec 10 05:37:04 2024 00:11:46.284 read: IOPS=4264, BW=16.7MiB/s (17.5MB/s)(48.2MiB/2891msec) 00:11:46.284 slat (usec): min=7, max=11390, avg=10.15, stdev=129.23 00:11:46.284 clat (usec): min=179, max=1726, avg=220.77, stdev=26.47 00:11:46.284 lat (usec): min=188, max=11662, avg=230.92, stdev=132.66 00:11:46.284 clat percentiles (usec): 00:11:46.284 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:11:46.284 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:11:46.284 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:11:46.284 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 310], 00:11:46.284 | 99.99th=[ 1532] 00:11:46.284 bw 
( KiB/s): min=17024, max=17512, per=40.17%, avg=17371.20, stdev=198.84, samples=5 00:11:46.284 iops : min= 4256, max= 4378, avg=4342.80, stdev=49.71, samples=5 00:11:46.284 lat (usec) : 250=95.60%, 500=4.37% 00:11:46.284 lat (msec) : 2=0.02% 00:11:46.284 cpu : usr=2.98%, sys=6.33%, ctx=12334, majf=0, minf=2 00:11:46.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.284 issued rwts: total=12329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.284 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=20288: Tue Dec 10 05:37:04 2024 00:11:46.284 read: IOPS=4358, BW=17.0MiB/s (17.8MB/s)(45.5MiB/2674msec) 00:11:46.284 slat (nsec): min=7013, max=40887, avg=8094.26, stdev=1377.88 00:11:46.284 clat (usec): min=167, max=422, avg=217.35, stdev=15.00 00:11:46.284 lat (usec): min=178, max=455, avg=225.44, stdev=15.15 00:11:46.284 clat percentiles (usec): 00:11:46.284 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:46.284 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:11:46.284 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 243], 00:11:46.284 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 306], 00:11:46.284 | 99.99th=[ 416] 00:11:46.284 bw ( KiB/s): min=17520, max=17688, per=40.76%, avg=17630.40, stdev=67.98, samples=5 00:11:46.284 iops : min= 4380, max= 4422, avg=4407.60, stdev=16.99, samples=5 00:11:46.284 lat (usec) : 250=97.39%, 500=2.60% 00:11:46.284 cpu : usr=2.17%, sys=7.11%, ctx=11654, majf=0, minf=1 00:11:46.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:46.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.284 issued rwts: total=11654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.284 00:11:46.284 Run status group 0 (all jobs): 00:11:46.284 READ: bw=42.2MiB/s (44.3MB/s), 103KiB/s-17.0MiB/s (105kB/s-17.8MB/s), io=140MiB (147MB), run=2674-3323msec 00:11:46.284 00:11:46.284 Disk stats (read/write): 00:11:46.284 nvme0n1: ios=79/0, merge=0/0, ticks=3002/0, in_queue=3002, util=92.88% 00:11:46.284 nvme0n2: ios=11865/0, merge=0/0, ticks=2981/0, in_queue=2981, util=94.32% 00:11:46.284 nvme0n3: ios=12030/0, merge=0/0, ticks=3155/0, in_queue=3155, util=99.38% 00:11:46.284 nvme0n4: ios=11191/0, merge=0/0, ticks=2303/0, in_queue=2303, util=96.32% 00:11:46.542 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.542 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:46.801 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.801 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:47.059 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:47.059 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:47.317 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:11:47.317 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:47.317 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:47.317 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 19998 00:11:47.317 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:47.317 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:47.575 nvmf hotplug test: fio failed as expected 00:11:47.575 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.834 rmmod nvme_tcp 00:11:47.834 rmmod nvme_fabrics 00:11:47.834 rmmod nvme_keyring 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 17087 ']' 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 17087 00:11:47.834 05:37:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 17087 ']' 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 17087 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 17087 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 17087' 00:11:47.834 killing process with pid 17087 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 17087 00:11:47.834 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 17087 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:48.094 
05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.094 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.001 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.001 00:11:50.001 real 0m28.485s 00:11:50.001 user 1m50.977s 00:11:50.001 sys 0m9.565s 00:11:50.001 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.001 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.001 ************************************ 00:11:50.001 END TEST nvmf_fio_target 00:11:50.001 ************************************ 00:11:50.260 05:37:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:50.260 05:37:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.260 05:37:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.260 05:37:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:50.260 ************************************ 00:11:50.260 START TEST nvmf_bdevio 00:11:50.260 ************************************ 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:50.261 * Looking for test storage... 00:11:50.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:50.261 05:37:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:50.261 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:50.261 --rc genhtml_branch_coverage=1 00:11:50.261 --rc genhtml_function_coverage=1 00:11:50.261 --rc genhtml_legend=1 00:11:50.261 --rc geninfo_all_blocks=1 00:11:50.261 --rc geninfo_unexecuted_blocks=1 00:11:50.261 00:11:50.261 ' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:50.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.261 --rc genhtml_branch_coverage=1 00:11:50.261 --rc genhtml_function_coverage=1 00:11:50.261 --rc genhtml_legend=1 00:11:50.261 --rc geninfo_all_blocks=1 00:11:50.261 --rc geninfo_unexecuted_blocks=1 00:11:50.261 00:11:50.261 ' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:50.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.261 --rc genhtml_branch_coverage=1 00:11:50.261 --rc genhtml_function_coverage=1 00:11:50.261 --rc genhtml_legend=1 00:11:50.261 --rc geninfo_all_blocks=1 00:11:50.261 --rc geninfo_unexecuted_blocks=1 00:11:50.261 00:11:50.261 ' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:50.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.261 --rc genhtml_branch_coverage=1 00:11:50.261 --rc genhtml_function_coverage=1 00:11:50.261 --rc genhtml_legend=1 00:11:50.261 --rc geninfo_all_blocks=1 00:11:50.261 --rc geninfo_unexecuted_blocks=1 00:11:50.261 00:11:50.261 ' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.261 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.520 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.520 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.520 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.521 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.095 05:37:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.095 05:37:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.095 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:57.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:57.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.096 
05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:57.096 Found net devices under 0000:af:00.0: cvl_0_0 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:57.096 Found net devices under 0000:af:00.1: cvl_0_1 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:11:57.096 00:11:57.096 --- 10.0.0.2 ping statistics --- 00:11:57.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.096 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:57.096 00:11:57.096 --- 10.0.0.1 ping statistics --- 00:11:57.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.096 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.096 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.096 05:37:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=25075 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 25075 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 25075 ']' 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.096 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.354 [2024-12-10 05:37:15.095137] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:11:57.354 [2024-12-10 05:37:15.095185] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.354 [2024-12-10 05:37:15.181021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.354 [2024-12-10 05:37:15.221003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.354 [2024-12-10 05:37:15.221041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.354 [2024-12-10 05:37:15.221047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.354 [2024-12-10 05:37:15.221053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.354 [2024-12-10 05:37:15.221058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.354 [2024-12-10 05:37:15.222469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:57.354 [2024-12-10 05:37:15.222575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:57.354 [2024-12-10 05:37:15.222683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.354 [2024-12-10 05:37:15.222684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.288 [2024-12-10 05:37:15.968652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.288 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.288 05:37:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.288 Malloc0 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.288 [2024-12-10 05:37:16.028998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:58.288 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:58.288 { 00:11:58.288 "params": { 00:11:58.288 "name": "Nvme$subsystem", 00:11:58.288 "trtype": "$TEST_TRANSPORT", 00:11:58.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.289 "adrfam": "ipv4", 00:11:58.289 "trsvcid": "$NVMF_PORT", 00:11:58.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.289 "hdgst": ${hdgst:-false}, 00:11:58.289 "ddgst": ${ddgst:-false} 00:11:58.289 }, 00:11:58.289 "method": "bdev_nvme_attach_controller" 00:11:58.289 } 00:11:58.289 EOF 00:11:58.289 )") 00:11:58.289 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:58.289 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:58.289 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:58.289 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:58.289 "params": { 00:11:58.289 "name": "Nvme1", 00:11:58.289 "trtype": "tcp", 00:11:58.289 "traddr": "10.0.0.2", 00:11:58.289 "adrfam": "ipv4", 00:11:58.289 "trsvcid": "4420", 00:11:58.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.289 "hdgst": false, 00:11:58.289 "ddgst": false 00:11:58.289 }, 00:11:58.289 "method": "bdev_nvme_attach_controller" 00:11:58.289 }' 00:11:58.289 [2024-12-10 05:37:16.080410] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:11:58.289 [2024-12-10 05:37:16.080457] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid25158 ] 00:11:58.289 [2024-12-10 05:37:16.164003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.289 [2024-12-10 05:37:16.206694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.289 [2024-12-10 05:37:16.206725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.289 [2024-12-10 05:37:16.206726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.547 I/O targets: 00:11:58.547 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:58.547 00:11:58.547 00:11:58.547 CUnit - A unit testing framework for C - Version 2.1-3 00:11:58.547 http://cunit.sourceforge.net/ 00:11:58.547 00:11:58.547 00:11:58.547 Suite: bdevio tests on: Nvme1n1 00:11:58.547 Test: blockdev write read block ...passed 00:11:58.547 Test: blockdev write zeroes read block ...passed 00:11:58.805 Test: blockdev write zeroes read no split ...passed 00:11:58.805 Test: blockdev write zeroes read split ...passed 
00:11:58.805 Test: blockdev write zeroes read split partial ...passed 00:11:58.805 Test: blockdev reset ...[2024-12-10 05:37:16.528031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:58.805 [2024-12-10 05:37:16.528097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d9d0 (9): Bad file descriptor 00:11:58.805 [2024-12-10 05:37:16.586875] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:58.805 passed 00:11:58.805 Test: blockdev write read 8 blocks ...passed 00:11:58.805 Test: blockdev write read size > 128k ...passed 00:11:58.805 Test: blockdev write read invalid size ...passed 00:11:58.805 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:58.805 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:58.805 Test: blockdev write read max offset ...passed 00:11:58.805 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:58.805 Test: blockdev writev readv 8 blocks ...passed 00:11:58.805 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.063 Test: blockdev writev readv block ...passed 00:11:59.063 Test: blockdev writev readv size > 128k ...passed 00:11:59.063 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.063 Test: blockdev comparev and writev ...[2024-12-10 05:37:16.838125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.063 [2024-12-10 05:37:16.838153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:59.063 [2024-12-10 05:37:16.838167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.838405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.838432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.838663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.838687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.838931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.838956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:11:59.064 [2024-12-10 05:37:16.838963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:59.064 passed 00:11:59.064 Test: blockdev nvme passthru rw ...passed 00:11:59.064 Test: blockdev nvme passthru vendor specific ...[2024-12-10 05:37:16.920589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.064 [2024-12-10 05:37:16.920606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.920714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.064 [2024-12-10 05:37:16.920726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.920830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.064 [2024-12-10 05:37:16.920841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:59.064 [2024-12-10 05:37:16.920937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.064 [2024-12-10 05:37:16.920947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:59.064 passed 00:11:59.064 Test: blockdev nvme admin passthru ...passed 00:11:59.064 Test: blockdev copy ...passed 00:11:59.064 00:11:59.064 Run Summary: Type Total Ran Passed Failed Inactive 00:11:59.064 suites 1 1 n/a 0 0 00:11:59.064 tests 23 23 23 0 0 00:11:59.064 asserts 152 152 152 0 n/a 00:11:59.064 00:11:59.064 Elapsed time = 1.151 seconds 00:11:59.323 05:37:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.323 rmmod nvme_tcp 00:11:59.323 rmmod nvme_fabrics 00:11:59.323 rmmod nvme_keyring 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 25075 ']' 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 25075 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 25075 ']' 
00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 25075 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 25075 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 25075' 00:11:59.323 killing process with pid 25075 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 25075 00:11:59.323 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 25075 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.583 05:37:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.583 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.122 00:12:02.122 real 0m11.488s 00:12:02.122 user 0m13.045s 00:12:02.122 sys 0m5.589s 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:02.122 ************************************ 00:12:02.122 END TEST nvmf_bdevio 00:12:02.122 ************************************ 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:02.122 00:12:02.122 real 4m50.721s 00:12:02.122 user 10m35.891s 00:12:02.122 sys 1m46.311s 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.122 ************************************ 00:12:02.122 END TEST nvmf_target_core 00:12:02.122 ************************************ 00:12:02.122 05:37:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:02.122 05:37:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.122 05:37:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.122 05:37:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.122 
************************************ 00:12:02.122 START TEST nvmf_target_extra 00:12:02.122 ************************************ 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:02.122 * Looking for test storage... 00:12:02.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:02.122 
05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.122 --rc genhtml_branch_coverage=1 00:12:02.122 --rc genhtml_function_coverage=1 00:12:02.122 --rc genhtml_legend=1 00:12:02.122 --rc geninfo_all_blocks=1 00:12:02.122 
--rc geninfo_unexecuted_blocks=1 00:12:02.122 00:12:02.122 ' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.122 --rc genhtml_branch_coverage=1 00:12:02.122 --rc genhtml_function_coverage=1 00:12:02.122 --rc genhtml_legend=1 00:12:02.122 --rc geninfo_all_blocks=1 00:12:02.122 --rc geninfo_unexecuted_blocks=1 00:12:02.122 00:12:02.122 ' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.122 --rc genhtml_branch_coverage=1 00:12:02.122 --rc genhtml_function_coverage=1 00:12:02.122 --rc genhtml_legend=1 00:12:02.122 --rc geninfo_all_blocks=1 00:12:02.122 --rc geninfo_unexecuted_blocks=1 00:12:02.122 00:12:02.122 ' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.122 --rc genhtml_branch_coverage=1 00:12:02.122 --rc genhtml_function_coverage=1 00:12:02.122 --rc genhtml_legend=1 00:12:02.122 --rc geninfo_all_blocks=1 00:12:02.122 --rc geninfo_unexecuted_blocks=1 00:12:02.122 00:12:02.122 ' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.122 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.123 ************************************ 00:12:02.123 START TEST nvmf_example 00:12:02.123 ************************************ 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:02.123 * Looking for test storage... 00:12:02.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.123 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.123 
05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.123 --rc genhtml_branch_coverage=1 00:12:02.123 --rc genhtml_function_coverage=1 00:12:02.123 --rc genhtml_legend=1 00:12:02.123 --rc geninfo_all_blocks=1 00:12:02.123 --rc geninfo_unexecuted_blocks=1 00:12:02.123 00:12:02.123 ' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.123 --rc genhtml_branch_coverage=1 00:12:02.123 --rc genhtml_function_coverage=1 00:12:02.123 --rc genhtml_legend=1 00:12:02.123 --rc geninfo_all_blocks=1 00:12:02.123 --rc geninfo_unexecuted_blocks=1 00:12:02.123 00:12:02.123 ' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.123 --rc genhtml_branch_coverage=1 00:12:02.123 --rc genhtml_function_coverage=1 00:12:02.123 --rc genhtml_legend=1 00:12:02.123 --rc geninfo_all_blocks=1 00:12:02.123 --rc geninfo_unexecuted_blocks=1 00:12:02.123 00:12:02.123 ' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.123 --rc 
genhtml_branch_coverage=1 00:12:02.123 --rc genhtml_function_coverage=1 00:12:02.123 --rc genhtml_legend=1 00:12:02.123 --rc geninfo_all_blocks=1 00:12:02.123 --rc geninfo_unexecuted_blocks=1 00:12:02.123 00:12:02.123 ' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.123 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:02.124 05:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.124 
05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.124 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.698 05:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:08.698 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:08.698 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:08.698 Found net devices under 0000:af:00.0: cvl_0_0 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.698 05:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:08.698 Found net devices under 0000:af:00.1: cvl_0_1 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.698 
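The trace above (nvmf/common.sh@410-429) resolves each PCI BDF to its kernel net device by globbing sysfs and stripping the path prefix. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the harness; it builds a fake sysfs tree so it runs without the actual hardware (the tree and BDFs are fabricated for illustration, mirroring the names in the log):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device lookup traced above: glob
# /sys/bus/pci/devices/<bdf>/net/* and keep only the interface basename.
set -euo pipefail

# Fake sysfs tree so the sketch runs anywhere (paths are hypothetical).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    # nullglob leaves the array empty when a device has no net interface
    shopt -s nullglob
    pci_net_devs=("$sysfs/$pci/net/"*)
    shopt -u nullglob
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep iface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

echo "${net_devs[*]}"
rm -rf "$sysfs"
```

The `${pci_net_devs[@]##*/}` expansion is the same basename trick the trace shows at nvmf/common.sh@427.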
05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.698 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:08.957 00:12:08.957 --- 10.0.0.2 ping statistics --- 00:12:08.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.957 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:12:08.957 00:12:08.957 --- 10.0.0.1 ping statistics --- 00:12:08.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.957 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.957 05:37:26 
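The block above (nvmf/common.sh@265-291) moves the target interface into its own network namespace so initiator and target run separate IP stacks on one host, then pings in both directions to verify the path. A dry-run sketch of that plumbing, assuming the interface names from the log; `run()` echoes instead of executing because the real commands need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init namespace plumbing traced above.
# run() prints each command; swap its body for '"$@"' to execute as root.
set -euo pipefail

run() { echo "+ $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
target_if=cvl_0_0        # moved into the namespace (target side)
initiator_if=cvl_0_1     # stays in the default namespace (initiator side)

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$target_if" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$target_if" up
# sanity-check connectivity both ways, as the trace does
run ping -c 1 10.0.0.2
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

Note how every target-side command is prefixed with `ip netns exec $NVMF_TARGET_NAMESPACE`; the harness captures exactly that prefix in `NVMF_TARGET_NS_CMD` and prepends it to `NVMF_APP`.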
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=29421 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 29421 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 29421 ']' 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:08.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.957 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:09.889 05:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:09.889 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:22.077 Initializing NVMe Controllers 00:12:22.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:22.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:22.077 Initialization complete. Launching workers. 00:12:22.077 ======================================================== 00:12:22.077 Latency(us) 00:12:22.077 Device Information : IOPS MiB/s Average min max 00:12:22.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18017.62 70.38 3551.44 674.75 17210.17 00:12:22.077 ======================================================== 00:12:22.077 Total : 18017.62 70.38 3551.44 674.75 17210.17 00:12:22.077 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.077 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.077 rmmod nvme_tcp 00:12:22.077 rmmod nvme_fabrics 00:12:22.077 rmmod nvme_keyring 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 29421 ']' 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 29421 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 29421 ']' 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 29421 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29421 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29421' 00:12:22.077 killing process with pid 29421 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 29421 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 29421 00:12:22.077 nvmf threads initialize successfully 00:12:22.077 bdev subsystem init successfully 00:12:22.077 created a nvmf target service 00:12:22.077 create targets's poll groups done 00:12:22.077 all subsystems of target started 00:12:22.077 nvmf target is running 00:12:22.077 all subsystems of target stopped 00:12:22.077 destroy targets's poll groups done 00:12:22.077 destroyed the nvmf target service 00:12:22.077 bdev subsystem finish successfully 
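The `killprocess` sequence above (autotest_common.sh@954-978) does not blindly `kill -9`: it first confirms the pid is alive with `kill -0`, looks up the command name with `ps --no-headers -o comm=`, and refuses to kill a `sudo` wrapper. A sketch of that pattern, demonstrated on a throwaway `sleep` so it is safe to run anywhere (the demo process is this sketch's addition, not part of the harness):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above: verify liveness, check the
# process name, then kill and reap. No 'set -e' here because wait on a killed
# child deliberately returns a non-zero status.
set -uo pipefail

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1          # already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = "sudo" ]; then
        return 1   # never SIGKILL a sudo wrapper; target its child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                         # reap so the pid is freed
}

sleep 60 &
demo_pid=$!
killprocess "$demo_pid"
kill -0 "$demo_pid" 2>/dev/null && alive=yes || alive=no
echo "$alive"
```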
00:12:22.077 nvmf threads destroy successfully 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.077 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:22.646 00:12:22.646 real 0m20.526s 00:12:22.646 user 0m45.999s 00:12:22.646 sys 0m6.657s 00:12:22.646 05:37:40 
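The `iptr` step above pairs with the earlier `ipts` call: every firewall rule the harness inserts carries an `-m comment --comment 'SPDK_NVMF:...'` tag, so teardown is simply `iptables-save | grep -v SPDK_NVMF | iptables-restore`, dropping all tagged rules at once while leaving everything else intact. A sketch of the filtering half, simulated on a text ruleset so it runs without root or iptables (the sample rules are invented for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the SPDK_NVMF rule-tagging cleanup traced above: filter the saved
# ruleset, keeping only untagged rules. The ruleset here is a stand-in for
# real `iptables-save` output.
set -euo pipefail

ruleset=$(cat <<'EOF'
-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p tcp --dport 22 -j ACCEPT
EOF
)

# what `iptables-save | grep -v SPDK_NVMF | iptables-restore` would keep
cleaned=$(grep -v SPDK_NVMF <<<"$ruleset")
echo "$cleaned"
```

Tagging rules with a unique comment like this is a robust cleanup idiom: it survives rule reordering and multiple test runs, unlike deleting rules by position.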
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:22.646 ************************************ 00:12:22.646 END TEST nvmf_example 00:12:22.646 ************************************ 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.646 ************************************ 00:12:22.646 START TEST nvmf_filesystem 00:12:22.646 ************************************ 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:22.646 * Looking for test storage... 
00:12:22.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:22.646 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:22.909 
05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:22.909 --rc lcov_branch_coverage=1 --rc 
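The `lt 1.15 2` trace above (scripts/common.sh@333-368) is a pure-bash version comparison: split both strings on `.`, `-`, and `:`, then compare numerically field by field, padding missing fields with 0. A compact sketch of that logic, reconstructed from the trace (the function name is this sketch's own):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt helper traced above: succeeds when the first
# version is strictly less than the second.
set -euo pipefail

version_lt() {
    local -a ver1 ver2
    local v ver1_l ver2_l a b
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # walk the longer of the two field lists; absent fields count as 0
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.40 2.39 || echo "2.40 >= 2.39"
```

Numeric field-wise comparison is what makes `1.15 < 2` come out true here, where a plain string comparison would get `1.15` vs `2` wrong for versions like `1.9` vs `1.10`.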
lcov_function_coverage=1 00:12:22.909 --rc genhtml_branch_coverage=1 00:12:22.909 --rc genhtml_function_coverage=1 00:12:22.909 --rc genhtml_legend=1 00:12:22.909 --rc geninfo_all_blocks=1 00:12:22.909 --rc geninfo_unexecuted_blocks=1 00:12:22.909 00:12:22.909 ' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.909 --rc genhtml_branch_coverage=1 00:12:22.909 --rc genhtml_function_coverage=1 00:12:22.909 --rc genhtml_legend=1 00:12:22.909 --rc geninfo_all_blocks=1 00:12:22.909 --rc geninfo_unexecuted_blocks=1 00:12:22.909 00:12:22.909 ' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.909 --rc genhtml_branch_coverage=1 00:12:22.909 --rc genhtml_function_coverage=1 00:12:22.909 --rc genhtml_legend=1 00:12:22.909 --rc geninfo_all_blocks=1 00:12:22.909 --rc geninfo_unexecuted_blocks=1 00:12:22.909 00:12:22.909 ' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.909 --rc genhtml_branch_coverage=1 00:12:22.909 --rc genhtml_function_coverage=1 00:12:22.909 --rc genhtml_legend=1 00:12:22.909 --rc geninfo_all_blocks=1 00:12:22.909 --rc geninfo_unexecuted_blocks=1 00:12:22.909 00:12:22.909 ' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:22.909 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:22.909 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:22.909 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:22.909 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:22.910 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:22.910 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:22.910 
05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:22.910 #define SPDK_CONFIG_H 00:12:22.910 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:22.910 #define SPDK_CONFIG_APPS 1 00:12:22.910 #define SPDK_CONFIG_ARCH native 00:12:22.910 #undef SPDK_CONFIG_ASAN 00:12:22.910 #undef SPDK_CONFIG_AVAHI 00:12:22.910 #undef SPDK_CONFIG_CET 00:12:22.910 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:22.910 #define SPDK_CONFIG_COVERAGE 1 00:12:22.910 #define SPDK_CONFIG_CROSS_PREFIX 00:12:22.910 #undef SPDK_CONFIG_CRYPTO 00:12:22.910 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:22.910 #undef SPDK_CONFIG_CUSTOMOCF 00:12:22.910 #undef SPDK_CONFIG_DAOS 00:12:22.910 #define SPDK_CONFIG_DAOS_DIR 00:12:22.910 #define SPDK_CONFIG_DEBUG 1 00:12:22.910 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:22.910 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:22.910 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:22.910 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:22.910 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:22.910 #undef SPDK_CONFIG_DPDK_UADK 00:12:22.910 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:22.910 #define SPDK_CONFIG_EXAMPLES 1 00:12:22.910 #undef SPDK_CONFIG_FC 00:12:22.910 #define SPDK_CONFIG_FC_PATH 00:12:22.910 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:22.910 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:22.910 #define SPDK_CONFIG_FSDEV 1 00:12:22.910 #undef SPDK_CONFIG_FUSE 00:12:22.910 #undef SPDK_CONFIG_FUZZER 00:12:22.910 #define SPDK_CONFIG_FUZZER_LIB 00:12:22.910 #undef SPDK_CONFIG_GOLANG 00:12:22.910 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:22.910 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:22.910 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:22.910 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:22.910 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:22.910 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:22.910 #undef SPDK_CONFIG_HAVE_LZ4 00:12:22.910 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:22.910 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:22.910 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:22.910 #define SPDK_CONFIG_IDXD 1 00:12:22.910 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:22.910 #undef SPDK_CONFIG_IPSEC_MB 00:12:22.910 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:22.910 #define SPDK_CONFIG_ISAL 1 00:12:22.910 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:22.910 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:22.910 #define SPDK_CONFIG_LIBDIR 00:12:22.910 #undef SPDK_CONFIG_LTO 00:12:22.910 #define SPDK_CONFIG_MAX_LCORES 128 00:12:22.910 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:22.910 #define SPDK_CONFIG_NVME_CUSE 1 00:12:22.910 #undef SPDK_CONFIG_OCF 00:12:22.910 #define SPDK_CONFIG_OCF_PATH 00:12:22.910 #define SPDK_CONFIG_OPENSSL_PATH 00:12:22.910 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:22.910 #define SPDK_CONFIG_PGO_DIR 00:12:22.910 #undef SPDK_CONFIG_PGO_USE 00:12:22.910 #define SPDK_CONFIG_PREFIX /usr/local 00:12:22.910 #undef SPDK_CONFIG_RAID5F 00:12:22.910 #undef SPDK_CONFIG_RBD 00:12:22.910 #define SPDK_CONFIG_RDMA 1 00:12:22.910 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:22.910 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:22.910 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:22.910 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:22.910 #define SPDK_CONFIG_SHARED 1 00:12:22.910 #undef SPDK_CONFIG_SMA 00:12:22.910 #define SPDK_CONFIG_TESTS 1 00:12:22.910 #undef SPDK_CONFIG_TSAN 00:12:22.910 #define SPDK_CONFIG_UBLK 1 00:12:22.910 #define SPDK_CONFIG_UBSAN 1 00:12:22.910 #undef SPDK_CONFIG_UNIT_TESTS 00:12:22.910 #undef SPDK_CONFIG_URING 00:12:22.910 #define SPDK_CONFIG_URING_PATH 00:12:22.910 #undef SPDK_CONFIG_URING_ZNS 00:12:22.910 #undef SPDK_CONFIG_USDT 00:12:22.910 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:22.910 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:22.910 #define SPDK_CONFIG_VFIO_USER 1 00:12:22.910 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:22.910 #define SPDK_CONFIG_VHOST 1 00:12:22.910 #define SPDK_CONFIG_VIRTIO 1 00:12:22.910 #undef SPDK_CONFIG_VTUNE 00:12:22.910 #define SPDK_CONFIG_VTUNE_DIR 00:12:22.910 #define SPDK_CONFIG_WERROR 1 00:12:22.910 #define SPDK_CONFIG_WPDK_DIR 00:12:22.910 #undef SPDK_CONFIG_XNVME 00:12:22.910 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.910 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:22.911 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:22.911 
05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:22.911 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:22.912 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:22.912 
05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:22.912 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:22.912 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 31791 ]] 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 31791 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 
-- # local storage_fallback storage_candidates 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.filz2V 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.filz2V/tests/target /tmp/spdk.filz2V 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:22.913 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:22.914 
05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93330649088 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837199872 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7506550784 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50408566784 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418597888 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:22.914 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144234496 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23207936 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50418286592 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:22.914 * Looking for test storage... 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=93330649088 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # 
new_size=9721143296 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.914 05:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:22.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.914 --rc genhtml_branch_coverage=1 00:12:22.914 --rc genhtml_function_coverage=1 00:12:22.914 --rc genhtml_legend=1 00:12:22.914 --rc geninfo_all_blocks=1 00:12:22.914 --rc geninfo_unexecuted_blocks=1 00:12:22.914 00:12:22.914 ' 00:12:22.914 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:22.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.914 --rc genhtml_branch_coverage=1 00:12:22.914 --rc genhtml_function_coverage=1 00:12:22.914 --rc genhtml_legend=1 00:12:22.914 --rc geninfo_all_blocks=1 00:12:22.914 --rc geninfo_unexecuted_blocks=1 00:12:22.914 00:12:22.914 ' 00:12:22.915 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:22.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.915 --rc genhtml_branch_coverage=1 00:12:22.915 --rc genhtml_function_coverage=1 00:12:22.915 --rc genhtml_legend=1 00:12:22.915 --rc geninfo_all_blocks=1 00:12:22.915 --rc geninfo_unexecuted_blocks=1 00:12:22.915 00:12:22.915 ' 00:12:22.915 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:22.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.915 --rc genhtml_branch_coverage=1 00:12:22.915 --rc 
genhtml_function_coverage=1 00:12:22.915 --rc genhtml_legend=1 00:12:22.915 --rc geninfo_all_blocks=1 00:12:22.915 --rc geninfo_unexecuted_blocks=1 00:12:22.915 00:12:22.915 ' 00:12:22.915 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.915 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.175 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.898 05:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:29.898 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:29.898 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.898 05:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:29.898 Found net devices under 0000:af:00.0: cvl_0_0 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:29.898 Found net devices under 0000:af:00.1: cvl_0_1 00:12:29.898 05:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.898 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:29.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:12:29.899 00:12:29.899 --- 10.0.0.2 ping statistics --- 00:12:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.899 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:12:29.899 00:12:29.899 --- 10.0.0.1 ping statistics --- 00:12:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.899 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:29.899 05:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.899 ************************************ 00:12:29.899 START TEST nvmf_filesystem_no_in_capsule 00:12:29.899 ************************************ 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=35322 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 35322 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 35322 ']' 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.899 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.899 [2024-12-10 05:37:47.805019] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:12:29.899 [2024-12-10 05:37:47.805066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.158 [2024-12-10 05:37:47.891716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.158 [2024-12-10 05:37:47.930810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.158 [2024-12-10 05:37:47.930850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:30.158 [2024-12-10 05:37:47.930856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.158 [2024-12-10 05:37:47.930861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.158 [2024-12-10 05:37:47.930866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.158 [2024-12-10 05:37:47.932430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.158 [2024-12-10 05:37:47.932541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.158 [2024-12-10 05:37:47.932627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.158 [2024-12-10 05:37:47.932629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:30.725 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.726 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 [2024-12-10 05:37:48.683928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 Malloc1 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 [2024-12-10 05:37:48.834393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:30.985 05:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:30.985 { 00:12:30.985 "name": "Malloc1", 00:12:30.985 "aliases": [ 00:12:30.985 "62b67d9b-4533-4903-ba94-221f83e7328e" 00:12:30.985 ], 00:12:30.985 "product_name": "Malloc disk", 00:12:30.985 "block_size": 512, 00:12:30.985 "num_blocks": 1048576, 00:12:30.985 "uuid": "62b67d9b-4533-4903-ba94-221f83e7328e", 00:12:30.985 "assigned_rate_limits": { 00:12:30.985 "rw_ios_per_sec": 0, 00:12:30.985 "rw_mbytes_per_sec": 0, 00:12:30.985 "r_mbytes_per_sec": 0, 00:12:30.985 "w_mbytes_per_sec": 0 00:12:30.985 }, 00:12:30.985 "claimed": true, 00:12:30.985 "claim_type": "exclusive_write", 00:12:30.985 "zoned": false, 00:12:30.985 "supported_io_types": { 00:12:30.985 "read": true, 00:12:30.985 "write": true, 00:12:30.985 "unmap": true, 00:12:30.985 "flush": true, 00:12:30.985 "reset": true, 00:12:30.985 "nvme_admin": false, 00:12:30.985 "nvme_io": false, 00:12:30.985 "nvme_io_md": false, 00:12:30.985 "write_zeroes": true, 00:12:30.985 "zcopy": true, 00:12:30.985 "get_zone_info": false, 00:12:30.985 "zone_management": false, 00:12:30.985 "zone_append": false, 00:12:30.985 "compare": false, 00:12:30.985 "compare_and_write": 
false, 00:12:30.985 "abort": true, 00:12:30.985 "seek_hole": false, 00:12:30.985 "seek_data": false, 00:12:30.985 "copy": true, 00:12:30.985 "nvme_iov_md": false 00:12:30.985 }, 00:12:30.985 "memory_domains": [ 00:12:30.985 { 00:12:30.985 "dma_device_id": "system", 00:12:30.985 "dma_device_type": 1 00:12:30.985 }, 00:12:30.985 { 00:12:30.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.985 "dma_device_type": 2 00:12:30.985 } 00:12:30.985 ], 00:12:30.985 "driver_specific": {} 00:12:30.985 } 00:12:30.985 ]' 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:30.985 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:31.244 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:31.244 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:31.244 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:31.244 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:31.244 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.179 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:32.179 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.179 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.179 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:32.179 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:34.711 05:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:34.711 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:35.278 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:36.214 05:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.214 ************************************ 00:12:36.214 START TEST filesystem_ext4 00:12:36.214 ************************************ 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:36.214 05:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:36.214 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:36.214 mke2fs 1.47.0 (5-Feb-2023) 00:12:36.474 Discarding device blocks: 0/522240 done 00:12:36.474 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:36.474 Filesystem UUID: 03ca0508-d2b2-4e17-bc01-a53034e4b65f 00:12:36.474 Superblock backups stored on blocks: 00:12:36.474 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:36.474 00:12:36.474 Allocating group tables: 0/64 done 00:12:36.474 Writing inode tables: 0/64 done 00:12:36.474 Creating journal (8192 blocks): done 00:12:36.474 Writing superblocks and filesystem accounting information: 0/64 done 00:12:36.474 00:12:36.474 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:36.474 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:43.051 05:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 35322 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:43.051 00:12:43.051 real 0m5.706s 00:12:43.051 user 0m0.032s 00:12:43.051 sys 0m0.065s 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:43.051 ************************************ 00:12:43.051 END TEST filesystem_ext4 00:12:43.051 ************************************ 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:43.051 
05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.051 ************************************ 00:12:43.051 START TEST filesystem_btrfs 00:12:43.051 ************************************ 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:43.051 05:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:43.051 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:43.051 btrfs-progs v6.8.1 00:12:43.051 See https://btrfs.readthedocs.io for more information. 00:12:43.051 00:12:43.051 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:43.051 NOTE: several default settings have changed in version 5.15, please make sure 00:12:43.051 this does not affect your deployments: 00:12:43.051 - DUP for metadata (-m dup) 00:12:43.051 - enabled no-holes (-O no-holes) 00:12:43.051 - enabled free-space-tree (-R free-space-tree) 00:12:43.051 00:12:43.051 Label: (null) 00:12:43.051 UUID: c9137937-2fdd-4c63-ab8e-7147cd329500 00:12:43.051 Node size: 16384 00:12:43.051 Sector size: 4096 (CPU page size: 4096) 00:12:43.051 Filesystem size: 510.00MiB 00:12:43.051 Block group profiles: 00:12:43.051 Data: single 8.00MiB 00:12:43.051 Metadata: DUP 32.00MiB 00:12:43.051 System: DUP 8.00MiB 00:12:43.051 SSD detected: yes 00:12:43.051 Zoned device: no 00:12:43.051 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:43.051 Checksum: crc32c 00:12:43.051 Number of devices: 1 00:12:43.051 Devices: 00:12:43.051 ID SIZE PATH 00:12:43.051 1 510.00MiB /dev/nvme0n1p1 00:12:43.051 00:12:43.051 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:43.051 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:43.051 05:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:43.051 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:43.051 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:43.051 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:43.051 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 35322 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:43.052 00:12:43.052 real 0m0.907s 00:12:43.052 user 0m0.034s 00:12:43.052 sys 0m0.103s 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.052 
05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:43.052 ************************************ 00:12:43.052 END TEST filesystem_btrfs 00:12:43.052 ************************************ 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.052 ************************************ 00:12:43.052 START TEST filesystem_xfs 00:12:43.052 ************************************ 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:43.052 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:43.052 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:43.052 = sectsz=512 attr=2, projid32bit=1 00:12:43.052 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:43.052 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:43.052 data = bsize=4096 blocks=130560, imaxpct=25 00:12:43.052 = sunit=0 swidth=0 blks 00:12:43.052 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:43.052 log =internal log bsize=4096 blocks=16384, version=2 00:12:43.052 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:43.052 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:44.428 Discarding blocks...Done. 
00:12:44.428 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:44.428 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:45.804 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:45.805 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:45.805 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:45.805 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:45.805 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:45.805 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 35322 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:46.064 05:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:46.064 00:12:46.064 real 0m2.884s 00:12:46.064 user 0m0.021s 00:12:46.064 sys 0m0.078s 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:46.064 ************************************ 00:12:46.064 END TEST filesystem_xfs 00:12:46.064 ************************************ 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:46.064 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 35322 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 35322 ']' 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 35322 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 35322 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 35322' 00:12:46.323 killing process with pid 35322 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 35322 00:12:46.323 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 35322 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:46.582 00:12:46.582 real 0m16.713s 00:12:46.582 user 1m5.823s 00:12:46.582 sys 0m1.448s 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.582 ************************************ 00:12:46.582 END TEST nvmf_filesystem_no_in_capsule 00:12:46.582 ************************************ 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.582 05:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:46.582 ************************************ 00:12:46.582 START TEST nvmf_filesystem_in_capsule 00:12:46.582 ************************************ 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.582 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=38276 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 38276 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 38276 ']' 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.841 05:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.841 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.841 [2024-12-10 05:38:04.589581] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:12:46.841 [2024-12-10 05:38:04.589619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.841 [2024-12-10 05:38:04.674452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.841 [2024-12-10 05:38:04.714586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.841 [2024-12-10 05:38:04.714625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.841 [2024-12-10 05:38:04.714632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.841 [2024-12-10 05:38:04.714638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.841 [2024-12-10 05:38:04.714643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:46.841 [2024-12-10 05:38:04.716070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.841 [2024-12-10 05:38:04.716188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.841 [2024-12-10 05:38:04.716296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.841 [2024-12-10 05:38:04.716298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.776 [2024-12-10 05:38:05.471978] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.776 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 Malloc1 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 05:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 [2024-12-10 05:38:05.634401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.777 05:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:47.777 { 00:12:47.777 "name": "Malloc1", 00:12:47.777 "aliases": [ 00:12:47.777 "a5dbff38-af28-40bc-abab-f7f84619ccfc" 00:12:47.777 ], 00:12:47.777 "product_name": "Malloc disk", 00:12:47.777 "block_size": 512, 00:12:47.777 "num_blocks": 1048576, 00:12:47.777 "uuid": "a5dbff38-af28-40bc-abab-f7f84619ccfc", 00:12:47.777 "assigned_rate_limits": { 00:12:47.777 "rw_ios_per_sec": 0, 00:12:47.777 "rw_mbytes_per_sec": 0, 00:12:47.777 "r_mbytes_per_sec": 0, 00:12:47.777 "w_mbytes_per_sec": 0 00:12:47.777 }, 00:12:47.777 "claimed": true, 00:12:47.777 "claim_type": "exclusive_write", 00:12:47.777 "zoned": false, 00:12:47.777 "supported_io_types": { 00:12:47.777 "read": true, 00:12:47.777 "write": true, 00:12:47.777 "unmap": true, 00:12:47.777 "flush": true, 00:12:47.777 "reset": true, 00:12:47.777 "nvme_admin": false, 00:12:47.777 "nvme_io": false, 00:12:47.777 "nvme_io_md": false, 00:12:47.777 "write_zeroes": true, 00:12:47.777 "zcopy": true, 00:12:47.777 "get_zone_info": false, 00:12:47.777 "zone_management": false, 00:12:47.777 "zone_append": false, 00:12:47.777 "compare": false, 00:12:47.777 "compare_and_write": false, 00:12:47.777 "abort": true, 00:12:47.777 "seek_hole": false, 00:12:47.777 "seek_data": false, 00:12:47.777 "copy": true, 00:12:47.777 "nvme_iov_md": false 00:12:47.777 }, 00:12:47.777 "memory_domains": [ 00:12:47.777 { 00:12:47.777 "dma_device_id": "system", 00:12:47.777 "dma_device_type": 1 00:12:47.777 }, 00:12:47.777 { 00:12:47.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.777 "dma_device_type": 2 00:12:47.777 } 00:12:47.777 ], 00:12:47.777 
"driver_specific": {} 00:12:47.777 } 00:12:47.777 ]' 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:47.777 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:48.036 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:48.036 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:48.036 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:48.036 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:48.036 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.972 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.972 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:48.972 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.973 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:48.973 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:51.506 05:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:51.506 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:51.506 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:51.506 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 ************************************ 00:12:52.443 START TEST filesystem_in_capsule_ext4 00:12:52.443 ************************************ 00:12:52.443 05:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:52.443 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:52.702 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:52.702 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:52.702 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:52.702 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:52.702 mke2fs 1.47.0 (5-Feb-2023) 00:12:52.702 Discarding device blocks: 
0/522240 done 00:12:52.702 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:52.702 Filesystem UUID: 4d3bdae9-76e0-4f57-8188-feb3326109f6 00:12:52.702 Superblock backups stored on blocks: 00:12:52.702 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:52.702 00:12:52.702 Allocating group tables: 0/64 done 00:12:52.702 Writing inode tables: 0/64 done 00:12:52.702 Creating journal (8192 blocks): done 00:12:55.009 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:12:55.009 00:12:55.009 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:55.009 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 38276 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:01.567 00:13:01.567 real 0m8.153s 00:13:01.567 user 0m0.023s 00:13:01.567 sys 0m0.077s 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:01.567 ************************************ 00:13:01.567 END TEST filesystem_in_capsule_ext4 00:13:01.567 ************************************ 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.567 ************************************ 00:13:01.567 START TEST 
filesystem_in_capsule_btrfs 00:13:01.567 ************************************ 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 
-- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:01.567 btrfs-progs v6.8.1 00:13:01.567 See https://btrfs.readthedocs.io for more information. 00:13:01.567 00:13:01.567 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:01.567 NOTE: several default settings have changed in version 5.15, please make sure 00:13:01.567 this does not affect your deployments: 00:13:01.567 - DUP for metadata (-m dup) 00:13:01.567 - enabled no-holes (-O no-holes) 00:13:01.567 - enabled free-space-tree (-R free-space-tree) 00:13:01.567 00:13:01.567 Label: (null) 00:13:01.567 UUID: 88014529-2197-4cf9-a8bf-674a12894c47 00:13:01.567 Node size: 16384 00:13:01.567 Sector size: 4096 (CPU page size: 4096) 00:13:01.567 Filesystem size: 510.00MiB 00:13:01.567 Block group profiles: 00:13:01.567 Data: single 8.00MiB 00:13:01.567 Metadata: DUP 32.00MiB 00:13:01.567 System: DUP 8.00MiB 00:13:01.567 SSD detected: yes 00:13:01.567 Zoned device: no 00:13:01.567 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:01.567 Checksum: crc32c 00:13:01.567 Number of devices: 1 00:13:01.567 Devices: 00:13:01.567 ID SIZE PATH 00:13:01.567 1 510.00MiB /dev/nvme0n1p1 00:13:01.567 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:01.567 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 38276 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:01.567 00:13:01.567 real 0m0.688s 00:13:01.567 user 0m0.026s 00:13:01.567 sys 0m0.111s 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:01.567 ************************************ 00:13:01.567 END TEST filesystem_in_capsule_btrfs 00:13:01.567 ************************************ 00:13:01.567 05:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.567 ************************************ 00:13:01.567 START TEST filesystem_in_capsule_xfs 00:13:01.567 ************************************ 00:13:01.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:01.568 
05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:01.568 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:01.568 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:01.568 = sectsz=512 attr=2, projid32bit=1 00:13:01.568 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:01.568 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:01.568 data = bsize=4096 blocks=130560, imaxpct=25 00:13:01.568 = sunit=0 swidth=0 blks 00:13:01.568 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:01.568 log =internal log bsize=4096 blocks=16384, version=2 00:13:01.568 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:01.568 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:02.501 Discarding blocks...Done. 
00:13:02.501 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:02.501 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 38276 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:05.030 00:13:05.030 real 0m3.421s 00:13:05.030 user 0m0.030s 00:13:05.030 sys 0m0.069s 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:05.030 ************************************ 00:13:05.030 END TEST filesystem_in_capsule_xfs 00:13:05.030 ************************************ 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:05.030 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.288 05:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 38276 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 38276 ']' 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 38276 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.288 05:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38276 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38276' 00:13:05.288 killing process with pid 38276 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 38276 00:13:05.288 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 38276 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:05.547 00:13:05.547 real 0m18.890s 00:13:05.547 user 1m14.541s 00:13:05.547 sys 0m1.444s 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.547 ************************************ 00:13:05.547 END TEST nvmf_filesystem_in_capsule 00:13:05.547 ************************************ 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 
-- # '[' tcp == tcp ']' 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.547 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.547 rmmod nvme_tcp 00:13:05.547 rmmod nvme_fabrics 00:13:05.547 rmmod nvme_keyring 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.806 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.711 00:13:07.711 real 0m45.146s 00:13:07.711 user 2m22.629s 00:13:07.711 sys 0m8.209s 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:07.711 ************************************ 00:13:07.711 END TEST nvmf_filesystem 00:13:07.711 ************************************ 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.711 05:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.970 ************************************ 00:13:07.970 START TEST nvmf_target_discovery 00:13:07.970 ************************************ 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:07.970 * Looking for test storage... 
00:13:07.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:07.970 
05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:07.970 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.971 --rc genhtml_branch_coverage=1 00:13:07.971 --rc genhtml_function_coverage=1 00:13:07.971 --rc genhtml_legend=1 00:13:07.971 --rc geninfo_all_blocks=1 00:13:07.971 --rc geninfo_unexecuted_blocks=1 00:13:07.971 00:13:07.971 ' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.971 --rc genhtml_branch_coverage=1 00:13:07.971 --rc genhtml_function_coverage=1 00:13:07.971 --rc genhtml_legend=1 00:13:07.971 --rc geninfo_all_blocks=1 00:13:07.971 --rc geninfo_unexecuted_blocks=1 00:13:07.971 00:13:07.971 ' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.971 --rc genhtml_branch_coverage=1 00:13:07.971 --rc genhtml_function_coverage=1 00:13:07.971 --rc genhtml_legend=1 00:13:07.971 --rc geninfo_all_blocks=1 00:13:07.971 --rc geninfo_unexecuted_blocks=1 00:13:07.971 00:13:07.971 ' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.971 --rc genhtml_branch_coverage=1 00:13:07.971 --rc genhtml_function_coverage=1 00:13:07.971 --rc genhtml_legend=1 00:13:07.971 --rc geninfo_all_blocks=1 00:13:07.971 --rc geninfo_unexecuted_blocks=1 00:13:07.971 00:13:07.971 ' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.971 05:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.971 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:14.539 05:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.539 05:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.539 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:14.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:14.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.540 05:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:14.540 Found net devices under 0000:af:00.0: cvl_0_0 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.540 05:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:14.540 Found net devices under 0000:af:00.1: cvl_0_1 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.540 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:14.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:13:14.799 00:13:14.799 --- 10.0.0.2 ping statistics --- 00:13:14.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.799 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:14.799 00:13:14.799 --- 10.0.0.1 ping statistics --- 00:13:14.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.799 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=45462 00:13:14.799 05:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 45462 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 45462 ']' 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.799 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:14.799 [2024-12-10 05:38:32.690714] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:13:14.799 [2024-12-10 05:38:32.690756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.057 [2024-12-10 05:38:32.774067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.057 [2024-12-10 05:38:32.814837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:15.057 [2024-12-10 05:38:32.814871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.057 [2024-12-10 05:38:32.814878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.057 [2024-12-10 05:38:32.814884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.057 [2024-12-10 05:38:32.814889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.057 [2024-12-10 05:38:32.816366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.057 [2024-12-10 05:38:32.816478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.057 [2024-12-10 05:38:32.816586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.057 [2024-12-10 05:38:32.816587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.057 [2024-12-10 05:38:32.958373] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.057 Null1 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:15.057 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.057 
05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.057 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.057 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.057 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.057 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 [2024-12-10 05:38:33.022371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 Null2 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 
05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 Null3 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 Null4 00:13:15.314 
05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.314 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:15.572 00:13:15.572 Discovery Log Number of Records 6, Generation counter 6 00:13:15.572 =====Discovery Log Entry 0====== 00:13:15.572 trtype: tcp 00:13:15.572 adrfam: ipv4 00:13:15.572 subtype: current discovery subsystem 00:13:15.572 treq: not required 00:13:15.572 portid: 0 00:13:15.572 trsvcid: 4420 00:13:15.572 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:15.572 traddr: 10.0.0.2 00:13:15.572 eflags: explicit discovery connections, duplicate discovery information 00:13:15.572 sectype: none 00:13:15.572 =====Discovery Log Entry 1====== 00:13:15.572 trtype: tcp 00:13:15.572 adrfam: ipv4 00:13:15.572 subtype: nvme subsystem 00:13:15.572 treq: not required 00:13:15.572 portid: 0 00:13:15.572 trsvcid: 4420 00:13:15.572 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:15.572 traddr: 10.0.0.2 00:13:15.572 eflags: none 00:13:15.572 sectype: none 00:13:15.572 =====Discovery Log Entry 2====== 00:13:15.572 
trtype: tcp 00:13:15.572 adrfam: ipv4 00:13:15.572 subtype: nvme subsystem 00:13:15.572 treq: not required 00:13:15.572 portid: 0 00:13:15.572 trsvcid: 4420 00:13:15.572 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:15.572 traddr: 10.0.0.2 00:13:15.572 eflags: none 00:13:15.572 sectype: none 00:13:15.572 =====Discovery Log Entry 3====== 00:13:15.572 trtype: tcp 00:13:15.572 adrfam: ipv4 00:13:15.572 subtype: nvme subsystem 00:13:15.572 treq: not required 00:13:15.572 portid: 0 00:13:15.572 trsvcid: 4420 00:13:15.572 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:15.572 traddr: 10.0.0.2 00:13:15.572 eflags: none 00:13:15.572 sectype: none 00:13:15.572 =====Discovery Log Entry 4====== 00:13:15.572 trtype: tcp 00:13:15.572 adrfam: ipv4 00:13:15.572 subtype: nvme subsystem 00:13:15.572 treq: not required 00:13:15.572 portid: 0 00:13:15.572 trsvcid: 4420 00:13:15.572 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:15.572 traddr: 10.0.0.2 00:13:15.572 eflags: none 00:13:15.572 sectype: none 00:13:15.572 =====Discovery Log Entry 5====== 00:13:15.572 trtype: tcp 00:13:15.572 adrfam: ipv4 00:13:15.572 subtype: discovery subsystem referral 00:13:15.572 treq: not required 00:13:15.572 portid: 0 00:13:15.572 trsvcid: 4430 00:13:15.572 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:15.572 traddr: 10.0.0.2 00:13:15.572 eflags: none 00:13:15.572 sectype: none 00:13:15.572 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:15.572 Perform nvmf subsystem discovery via RPC 00:13:15.572 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:15.572 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.572 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.572 [ 00:13:15.572 { 00:13:15.572 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:15.572 "subtype": "Discovery", 00:13:15.572 "listen_addresses": [ 00:13:15.572 { 00:13:15.572 "trtype": "TCP", 00:13:15.572 "adrfam": "IPv4", 00:13:15.572 "traddr": "10.0.0.2", 00:13:15.572 "trsvcid": "4420" 00:13:15.572 } 00:13:15.572 ], 00:13:15.572 "allow_any_host": true, 00:13:15.572 "hosts": [] 00:13:15.572 }, 00:13:15.572 { 00:13:15.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.572 "subtype": "NVMe", 00:13:15.572 "listen_addresses": [ 00:13:15.572 { 00:13:15.572 "trtype": "TCP", 00:13:15.572 "adrfam": "IPv4", 00:13:15.572 "traddr": "10.0.0.2", 00:13:15.572 "trsvcid": "4420" 00:13:15.572 } 00:13:15.572 ], 00:13:15.572 "allow_any_host": true, 00:13:15.572 "hosts": [], 00:13:15.572 "serial_number": "SPDK00000000000001", 00:13:15.572 "model_number": "SPDK bdev Controller", 00:13:15.572 "max_namespaces": 32, 00:13:15.572 "min_cntlid": 1, 00:13:15.572 "max_cntlid": 65519, 00:13:15.572 "namespaces": [ 00:13:15.572 { 00:13:15.572 "nsid": 1, 00:13:15.572 "bdev_name": "Null1", 00:13:15.572 "name": "Null1", 00:13:15.572 "nguid": "77E52915CF2C4D1AAF9F14AD07FE15F0", 00:13:15.572 "uuid": "77e52915-cf2c-4d1a-af9f-14ad07fe15f0" 00:13:15.572 } 00:13:15.572 ] 00:13:15.572 }, 00:13:15.572 { 00:13:15.572 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:15.572 "subtype": "NVMe", 00:13:15.572 "listen_addresses": [ 00:13:15.572 { 00:13:15.572 "trtype": "TCP", 00:13:15.572 "adrfam": "IPv4", 00:13:15.572 "traddr": "10.0.0.2", 00:13:15.572 "trsvcid": "4420" 00:13:15.572 } 00:13:15.572 ], 00:13:15.572 "allow_any_host": true, 00:13:15.572 "hosts": [], 00:13:15.572 "serial_number": "SPDK00000000000002", 00:13:15.572 "model_number": "SPDK bdev Controller", 00:13:15.572 "max_namespaces": 32, 00:13:15.572 "min_cntlid": 1, 00:13:15.572 "max_cntlid": 65519, 00:13:15.572 "namespaces": [ 00:13:15.572 { 00:13:15.572 "nsid": 1, 00:13:15.572 "bdev_name": "Null2", 00:13:15.572 "name": "Null2", 00:13:15.572 "nguid": "8EA0E13795F44B958C6E4C396BF4EACA", 
00:13:15.572 "uuid": "8ea0e137-95f4-4b95-8c6e-4c396bf4eaca" 00:13:15.572 } 00:13:15.572 ] 00:13:15.572 }, 00:13:15.572 { 00:13:15.572 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:15.572 "subtype": "NVMe", 00:13:15.572 "listen_addresses": [ 00:13:15.572 { 00:13:15.572 "trtype": "TCP", 00:13:15.572 "adrfam": "IPv4", 00:13:15.572 "traddr": "10.0.0.2", 00:13:15.572 "trsvcid": "4420" 00:13:15.572 } 00:13:15.572 ], 00:13:15.572 "allow_any_host": true, 00:13:15.572 "hosts": [], 00:13:15.572 "serial_number": "SPDK00000000000003", 00:13:15.572 "model_number": "SPDK bdev Controller", 00:13:15.572 "max_namespaces": 32, 00:13:15.572 "min_cntlid": 1, 00:13:15.572 "max_cntlid": 65519, 00:13:15.572 "namespaces": [ 00:13:15.572 { 00:13:15.572 "nsid": 1, 00:13:15.572 "bdev_name": "Null3", 00:13:15.572 "name": "Null3", 00:13:15.572 "nguid": "F1808F1EEA404EC6A6CAAC07BC8010B0", 00:13:15.572 "uuid": "f1808f1e-ea40-4ec6-a6ca-ac07bc8010b0" 00:13:15.572 } 00:13:15.572 ] 00:13:15.572 }, 00:13:15.572 { 00:13:15.572 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:15.572 "subtype": "NVMe", 00:13:15.572 "listen_addresses": [ 00:13:15.572 { 00:13:15.572 "trtype": "TCP", 00:13:15.572 "adrfam": "IPv4", 00:13:15.572 "traddr": "10.0.0.2", 00:13:15.572 "trsvcid": "4420" 00:13:15.572 } 00:13:15.572 ], 00:13:15.572 "allow_any_host": true, 00:13:15.572 "hosts": [], 00:13:15.572 "serial_number": "SPDK00000000000004", 00:13:15.572 "model_number": "SPDK bdev Controller", 00:13:15.572 "max_namespaces": 32, 00:13:15.572 "min_cntlid": 1, 00:13:15.573 "max_cntlid": 65519, 00:13:15.573 "namespaces": [ 00:13:15.573 { 00:13:15.573 "nsid": 1, 00:13:15.573 "bdev_name": "Null4", 00:13:15.573 "name": "Null4", 00:13:15.573 "nguid": "D25B49FF11314B448EC2685D0752F126", 00:13:15.573 "uuid": "d25b49ff-1131-4b44-8ec2-685d0752f126" 00:13:15.573 } 00:13:15.573 ] 00:13:15.573 } 00:13:15.573 ] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 
05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.573 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.573 rmmod nvme_tcp 00:13:15.573 rmmod nvme_fabrics 00:13:15.573 rmmod nvme_keyring 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 45462 ']' 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 45462 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 45462 ']' 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 45462 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:15.832 
05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 45462 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 45462' 00:13:15.832 killing process with pid 45462 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 45462 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 45462 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- 
# remove_spdk_ns 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.832 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.371 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.371 00:13:18.371 real 0m10.155s 00:13:18.371 user 0m5.835s 00:13:18.371 sys 0m5.445s 00:13:18.371 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.371 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.371 ************************************ 00:13:18.371 END TEST nvmf_target_discovery 00:13:18.371 ************************************ 00:13:18.371 05:38:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:18.371 05:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.371 05:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.372 05:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.372 ************************************ 00:13:18.372 START TEST nvmf_referrals 00:13:18.372 ************************************ 00:13:18.372 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:18.372 * Looking for test storage... 
00:13:18.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.372 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:18.372 05:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.372 
--rc genhtml_branch_coverage=1 00:13:18.372 --rc genhtml_function_coverage=1 00:13:18.372 --rc genhtml_legend=1 00:13:18.372 --rc geninfo_all_blocks=1 00:13:18.372 --rc geninfo_unexecuted_blocks=1 00:13:18.372 00:13:18.372 ' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.372 --rc genhtml_branch_coverage=1 00:13:18.372 --rc genhtml_function_coverage=1 00:13:18.372 --rc genhtml_legend=1 00:13:18.372 --rc geninfo_all_blocks=1 00:13:18.372 --rc geninfo_unexecuted_blocks=1 00:13:18.372 00:13:18.372 ' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.372 --rc genhtml_branch_coverage=1 00:13:18.372 --rc genhtml_function_coverage=1 00:13:18.372 --rc genhtml_legend=1 00:13:18.372 --rc geninfo_all_blocks=1 00:13:18.372 --rc geninfo_unexecuted_blocks=1 00:13:18.372 00:13:18.372 ' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.372 --rc genhtml_branch_coverage=1 00:13:18.372 --rc genhtml_function_coverage=1 00:13:18.372 --rc genhtml_legend=1 00:13:18.372 --rc geninfo_all_blocks=1 00:13:18.372 --rc geninfo_unexecuted_blocks=1 00:13:18.372 00:13:18.372 ' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.372 
05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.372 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.373 05:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.373 05:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.373 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.943 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:24.944 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:24.944 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:24.944 Found net devices under 0000:af:00.0: cvl_0_0 00:13:24.944 05:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:24.944 Found net devices under 0000:af:00.1: cvl_0_1 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:13:24.944 00:13:24.944 --- 10.0.0.2 ping statistics --- 00:13:24.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.944 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:13:24.944 00:13:24.944 --- 10.0.0.1 ping statistics --- 00:13:24.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.944 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.944 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=49605 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 49605 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 49605 ']' 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.203 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.203 [2024-12-10 05:38:42.956521] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:13:25.203 [2024-12-10 05:38:42.956564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.203 [2024-12-10 05:38:43.040256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.203 [2024-12-10 05:38:43.080536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.203 [2024-12-10 05:38:43.080573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:25.203 [2024-12-10 05:38:43.080579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.203 [2024-12-10 05:38:43.080585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.203 [2024-12-10 05:38:43.080590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.203 [2024-12-10 05:38:43.082114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.203 [2024-12-10 05:38:43.082240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.203 [2024-12-10 05:38:43.082304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.203 [2024-12-10 05:38:43.082305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.461 [2024-12-10 05:38:43.218908] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.461 [2024-12-10 05:38:43.237353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:25.461 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:25.462 05:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.462 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.720 05:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.720 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.979 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:26.237 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:26.496 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:26.496 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:26.496 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:26.496 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:26.496 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:26.496 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:26.758 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:27.026 05:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:27.026 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:27.394 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.663 rmmod nvme_tcp 00:13:27.663 rmmod nvme_fabrics 00:13:27.663 rmmod nvme_keyring 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 49605 ']' 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 49605 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 49605 ']' 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 49605 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 49605 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49605' 00:13:27.663 killing process with pid 49605 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 
49605 00:13:27.663 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 49605 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.922 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.826 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.826 00:13:29.826 real 0m11.841s 00:13:29.826 user 0m13.087s 00:13:29.826 sys 0m5.801s 00:13:29.826 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.826 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.826 ************************************ 
00:13:29.826 END TEST nvmf_referrals 00:13:29.826 ************************************ 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.086 ************************************ 00:13:30.086 START TEST nvmf_connect_disconnect 00:13:30.086 ************************************ 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:30.086 * Looking for test storage... 
00:13:30.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:30.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.086 --rc genhtml_branch_coverage=1 00:13:30.086 --rc genhtml_function_coverage=1 00:13:30.086 --rc genhtml_legend=1 00:13:30.086 --rc geninfo_all_blocks=1 00:13:30.086 --rc geninfo_unexecuted_blocks=1 00:13:30.086 00:13:30.086 ' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:30.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.086 --rc genhtml_branch_coverage=1 00:13:30.086 --rc genhtml_function_coverage=1 00:13:30.086 --rc genhtml_legend=1 00:13:30.086 --rc geninfo_all_blocks=1 00:13:30.086 --rc geninfo_unexecuted_blocks=1 00:13:30.086 00:13:30.086 ' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:30.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.086 --rc genhtml_branch_coverage=1 00:13:30.086 --rc genhtml_function_coverage=1 00:13:30.086 --rc genhtml_legend=1 00:13:30.086 --rc geninfo_all_blocks=1 00:13:30.086 --rc geninfo_unexecuted_blocks=1 00:13:30.086 00:13:30.086 ' 00:13:30.086 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:30.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.086 --rc genhtml_branch_coverage=1 00:13:30.086 --rc genhtml_function_coverage=1 00:13:30.086 --rc genhtml_legend=1 00:13:30.086 --rc geninfo_all_blocks=1 00:13:30.086 --rc geninfo_unexecuted_blocks=1 00:13:30.086 00:13:30.086 ' 00:13:30.087 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.087 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.087 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.657 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.658 05:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:36.658 05:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:36.658 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:36.658 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.658 05:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:36.658 Found net devices under 0000:af:00.0: cvl_0_0 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.658 05:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:36.658 Found net devices under 0000:af:00.1: cvl_0_1 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.658 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.918 05:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:36.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:13:36.918 00:13:36.918 --- 10.0.0.2 ping statistics --- 00:13:36.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.918 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:13:36.918 00:13:36.918 --- 10.0.0.1 ping statistics --- 00:13:36.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.918 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.918 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.177 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=54050 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 54050 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 54050 ']' 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.178 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.178 [2024-12-10 05:38:54.937625] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:13:37.178 [2024-12-10 05:38:54.937669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.178 [2024-12-10 05:38:55.020765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.178 [2024-12-10 05:38:55.061082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:37.178 [2024-12-10 05:38:55.061119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.178 [2024-12-10 05:38:55.061126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.178 [2024-12-10 05:38:55.061132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.178 [2024-12-10 05:38:55.061137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.178 [2024-12-10 05:38:55.062517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.178 [2024-12-10 05:38:55.062614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.178 [2024-12-10 05:38:55.062719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.178 [2024-12-10 05:38:55.062720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:37.436 05:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.436 [2024-12-10 05:38:55.203382] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.436 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.437 05:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.437 [2024-12-10 05:38:55.269268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:37.437 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:40.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:53.847 05:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.847 rmmod nvme_tcp 00:13:53.847 rmmod nvme_fabrics 00:13:53.847 rmmod nvme_keyring 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 54050 ']' 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 54050 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 54050 ']' 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 54050 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 54050 00:13:53.847 05:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 54050' 00:13:53.847 killing process with pid 54050 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 54050 00:13:53.847 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 54050 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.107 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.014 00:13:56.014 real 0m26.091s 00:13:56.014 user 1m8.505s 00:13:56.014 sys 0m6.542s 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.014 ************************************ 00:13:56.014 END TEST nvmf_connect_disconnect 00:13:56.014 ************************************ 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.014 05:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.274 ************************************ 00:13:56.274 START TEST nvmf_multitarget 00:13:56.274 ************************************ 00:13:56.274 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:56.274 * Looking for test storage... 
00:13:56.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:56.274 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.274 --rc genhtml_branch_coverage=1 00:13:56.274 --rc genhtml_function_coverage=1 00:13:56.274 --rc genhtml_legend=1 00:13:56.274 --rc geninfo_all_blocks=1 00:13:56.274 --rc geninfo_unexecuted_blocks=1 00:13:56.274 00:13:56.274 ' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:56.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.274 --rc genhtml_branch_coverage=1 00:13:56.274 --rc genhtml_function_coverage=1 00:13:56.274 --rc genhtml_legend=1 00:13:56.274 --rc geninfo_all_blocks=1 00:13:56.274 --rc geninfo_unexecuted_blocks=1 00:13:56.274 00:13:56.274 ' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:56.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.274 --rc genhtml_branch_coverage=1 00:13:56.274 --rc genhtml_function_coverage=1 00:13:56.274 --rc genhtml_legend=1 00:13:56.274 --rc geninfo_all_blocks=1 00:13:56.274 --rc geninfo_unexecuted_blocks=1 00:13:56.274 00:13:56.274 ' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:56.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.274 --rc genhtml_branch_coverage=1 00:13:56.274 --rc genhtml_function_coverage=1 00:13:56.274 --rc genhtml_legend=1 00:13:56.274 --rc geninfo_all_blocks=1 00:13:56.274 --rc geninfo_unexecuted_blocks=1 00:13:56.274 00:13:56.274 ' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.274 05:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.274 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.275 05:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.275 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:02.844 05:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.844 05:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:02.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:02.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.844 05:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.844 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:02.845 Found net devices under 0000:af:00.0: cvl_0_0 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.845 
05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:02.845 Found net devices under 0000:af:00.1: cvl_0_1 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.845 05:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.845 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.103 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.103 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.103 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:14:03.104 00:14:03.104 --- 10.0.0.2 ping statistics --- 00:14:03.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.104 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:03.104 00:14:03.104 --- 10.0.0.1 ping statistics --- 00:14:03.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.104 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=61391 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 61391 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 61391 ']' 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.104 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.104 [2024-12-10 05:39:21.029902] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:14:03.104 [2024-12-10 05:39:21.029944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.362 [2024-12-10 05:39:21.111838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.362 [2024-12-10 05:39:21.151041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.362 [2024-12-10 05:39:21.151082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:03.362 [2024-12-10 05:39:21.151092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.362 [2024-12-10 05:39:21.151100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.362 [2024-12-10 05:39:21.151107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.362 [2024-12-10 05:39:21.152625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.362 [2024-12-10 05:39:21.152737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.362 [2024-12-10 05:39:21.152843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.362 [2024-12-10 05:39:21.152844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.927 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.927 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:03.927 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.927 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.927 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:04.184 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.184 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:04.184 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:04.184 05:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:04.184 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:04.184 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:04.184 "nvmf_tgt_1" 00:14:04.184 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:04.442 "nvmf_tgt_2" 00:14:04.442 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:04.442 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:04.442 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:04.442 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:04.699 true 00:14:04.699 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:04.699 true 00:14:04.699 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:04.699 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.958 rmmod nvme_tcp 00:14:04.958 rmmod nvme_fabrics 00:14:04.958 rmmod nvme_keyring 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 61391 ']' 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 61391 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 61391 ']' 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 61391 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61391 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61391' 00:14:04.958 killing process with pid 61391 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 61391 00:14:04.958 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 61391 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.217 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.122 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:07.122 00:14:07.122 real 0m11.054s 00:14:07.122 user 0m10.106s 00:14:07.122 sys 0m5.538s 00:14:07.122 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.122 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:07.122 ************************************ 00:14:07.122 END TEST nvmf_multitarget 00:14:07.122 ************************************ 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.380 ************************************ 00:14:07.380 START TEST nvmf_rpc 00:14:07.380 ************************************ 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:07.380 * Looking for test storage... 
00:14:07.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.380 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.381 05:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:07.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.381 --rc genhtml_branch_coverage=1 00:14:07.381 --rc genhtml_function_coverage=1 00:14:07.381 --rc genhtml_legend=1 00:14:07.381 --rc geninfo_all_blocks=1 00:14:07.381 --rc geninfo_unexecuted_blocks=1 
00:14:07.381 00:14:07.381 ' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:07.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.381 --rc genhtml_branch_coverage=1 00:14:07.381 --rc genhtml_function_coverage=1 00:14:07.381 --rc genhtml_legend=1 00:14:07.381 --rc geninfo_all_blocks=1 00:14:07.381 --rc geninfo_unexecuted_blocks=1 00:14:07.381 00:14:07.381 ' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:07.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.381 --rc genhtml_branch_coverage=1 00:14:07.381 --rc genhtml_function_coverage=1 00:14:07.381 --rc genhtml_legend=1 00:14:07.381 --rc geninfo_all_blocks=1 00:14:07.381 --rc geninfo_unexecuted_blocks=1 00:14:07.381 00:14:07.381 ' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:07.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.381 --rc genhtml_branch_coverage=1 00:14:07.381 --rc genhtml_function_coverage=1 00:14:07.381 --rc genhtml_legend=1 00:14:07.381 --rc geninfo_all_blocks=1 00:14:07.381 --rc geninfo_unexecuted_blocks=1 00:14:07.381 00:14:07.381 ' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.381 05:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:07.381 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:07.381 05:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.951 
05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:14:13.951 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:13.951 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:13.951 Found net devices under 0000:af:00.0: cvl_0_0 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:13.951 Found net devices under 0000:af:00.1: cvl_0_1 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.951 05:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:13.951 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:13.952 
05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.952 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.210 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.210 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.210 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.210 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:14:14.210 00:14:14.210 --- 10.0.0.2 ping statistics --- 00:14:14.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.210 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:14.210 00:14:14.210 --- 10.0.0.1 ping statistics --- 00:14:14.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.210 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.210 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=65522 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 65522 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
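The netns plumbing traced above (move one NIC port into a namespace as the target side, address both ends, open TCP/4420, then ping in both directions) can be condensed into a dry-run sketch. `run` only echoes, so this reads and tests without root; the device names and IPs mirror the log, but this is a simplified reconstruction, not the harness's `nvmf_tcp_init` itself:

```shell
# Dry-run sketch of the target-namespace setup: run() echoes instead of
# executing, so no root or real NICs are needed.
run() { echo "+ $*"; }

setup_tcp_ns() {
  local ns=$1 tgt=$2 ini=$3
  run ip netns add "$ns"
  run ip link set "$tgt" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$ini"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
  run ip link set "$ini" up
  run ip netns exec "$ns" ip link set "$tgt" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After the real commands succeed, the harness verifies the link with `ping -c 1` from each side, as the log shows.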
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 65522 ']' 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.211 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.211 [2024-12-10 05:39:32.139189] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:14:14.211 [2024-12-10 05:39:32.139247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.469 [2024-12-10 05:39:32.210206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.469 [2024-12-10 05:39:32.251866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.469 [2024-12-10 05:39:32.251902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.469 [2024-12-10 05:39:32.251909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.469 [2024-12-10 05:39:32.251914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:14.469 [2024-12-10 05:39:32.251919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.469 [2024-12-10 05:39:32.253473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.469 [2024-12-10 05:39:32.253586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.469 [2024-12-10 05:39:32.253693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.469 [2024-12-10 05:39:32.253694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:14.469 "tick_rate": 2100000000, 00:14:14.469 "poll_groups": [ 00:14:14.469 { 00:14:14.469 "name": "nvmf_tgt_poll_group_000", 00:14:14.469 "admin_qpairs": 0, 00:14:14.469 "io_qpairs": 0, 00:14:14.469 
"current_admin_qpairs": 0, 00:14:14.469 "current_io_qpairs": 0, 00:14:14.469 "pending_bdev_io": 0, 00:14:14.469 "completed_nvme_io": 0, 00:14:14.469 "transports": [] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "name": "nvmf_tgt_poll_group_001", 00:14:14.469 "admin_qpairs": 0, 00:14:14.469 "io_qpairs": 0, 00:14:14.469 "current_admin_qpairs": 0, 00:14:14.469 "current_io_qpairs": 0, 00:14:14.469 "pending_bdev_io": 0, 00:14:14.469 "completed_nvme_io": 0, 00:14:14.469 "transports": [] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "name": "nvmf_tgt_poll_group_002", 00:14:14.469 "admin_qpairs": 0, 00:14:14.469 "io_qpairs": 0, 00:14:14.469 "current_admin_qpairs": 0, 00:14:14.469 "current_io_qpairs": 0, 00:14:14.469 "pending_bdev_io": 0, 00:14:14.469 "completed_nvme_io": 0, 00:14:14.469 "transports": [] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "name": "nvmf_tgt_poll_group_003", 00:14:14.469 "admin_qpairs": 0, 00:14:14.469 "io_qpairs": 0, 00:14:14.469 "current_admin_qpairs": 0, 00:14:14.469 "current_io_qpairs": 0, 00:14:14.469 "pending_bdev_io": 0, 00:14:14.469 "completed_nvme_io": 0, 00:14:14.469 "transports": [] 00:14:14.469 } 00:14:14.469 ] 00:14:14.469 }' 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:14.469 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.727 [2024-12-10 05:39:32.499737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.727 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:14.727 "tick_rate": 2100000000, 00:14:14.728 "poll_groups": [ 00:14:14.728 { 00:14:14.728 "name": "nvmf_tgt_poll_group_000", 00:14:14.728 "admin_qpairs": 0, 00:14:14.728 "io_qpairs": 0, 00:14:14.728 "current_admin_qpairs": 0, 00:14:14.728 "current_io_qpairs": 0, 00:14:14.728 "pending_bdev_io": 0, 00:14:14.728 "completed_nvme_io": 0, 00:14:14.728 "transports": [ 00:14:14.728 { 00:14:14.728 "trtype": "TCP" 00:14:14.728 } 00:14:14.728 ] 00:14:14.728 }, 00:14:14.728 { 00:14:14.728 "name": "nvmf_tgt_poll_group_001", 00:14:14.728 "admin_qpairs": 0, 00:14:14.728 "io_qpairs": 0, 00:14:14.728 "current_admin_qpairs": 0, 00:14:14.728 "current_io_qpairs": 0, 00:14:14.728 "pending_bdev_io": 0, 00:14:14.728 "completed_nvme_io": 0, 00:14:14.728 "transports": [ 00:14:14.728 { 00:14:14.728 "trtype": "TCP" 00:14:14.728 } 00:14:14.728 ] 00:14:14.728 }, 00:14:14.728 { 00:14:14.728 "name": "nvmf_tgt_poll_group_002", 00:14:14.728 "admin_qpairs": 0, 00:14:14.728 "io_qpairs": 0, 00:14:14.728 
"current_admin_qpairs": 0, 00:14:14.728 "current_io_qpairs": 0, 00:14:14.728 "pending_bdev_io": 0, 00:14:14.728 "completed_nvme_io": 0, 00:14:14.728 "transports": [ 00:14:14.728 { 00:14:14.728 "trtype": "TCP" 00:14:14.728 } 00:14:14.728 ] 00:14:14.728 }, 00:14:14.728 { 00:14:14.728 "name": "nvmf_tgt_poll_group_003", 00:14:14.728 "admin_qpairs": 0, 00:14:14.728 "io_qpairs": 0, 00:14:14.728 "current_admin_qpairs": 0, 00:14:14.728 "current_io_qpairs": 0, 00:14:14.728 "pending_bdev_io": 0, 00:14:14.728 "completed_nvme_io": 0, 00:14:14.728 "transports": [ 00:14:14.728 { 00:14:14.728 "trtype": "TCP" 00:14:14.728 } 00:14:14.728 ] 00:14:14.728 } 00:14:14.728 ] 00:14:14.728 }' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
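The `jsum` checks above extract one numeric field per poll group with `jq` and total them with `awk`. A sketch of just the summation stage, with the `jq` output simulated by a fixed list so it runs without the stats JSON:

```shell
# Sketch of jsum's aggregation: sum one number per line; s+0 forces a
# numeric 0 on empty input.
jsum_tail() { awk '{s+=$1} END {print s+0}'; }

printf '%s\n' 0 0 0 0 | jsum_tail   # four idle poll groups → 0
```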
MALLOC_BDEV_SIZE=64 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.728 Malloc1 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.728 [2024-12-10 05:39:32.674874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:14.728 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.986 
05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:14.986 [2024-12-10 05:39:32.713409] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:14:14.986 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:14.986 could not add new controller: failed to write to nvme-fabrics device 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.986 05:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.986 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.359 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.359 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.359 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.359 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:16.359 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:18.256 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.256 05:39:36 
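The `waitforserial` loop above polls `lsblk -l -o NAME,SERIAL` for the subsystem serial, sleeping between attempts until the namespace shows up. A generic sketch of that bounded-retry pattern (the real harness sleeps 2s per try; `sleep 0` keeps this fast):

```shell
# Sketch: retry a predicate up to $1 attempts; return 0 on first success.
wait_for() {
  local tries=$1; shift
  local i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 0   # harness uses sleep 2 between lsblk polls
  done
  return 1
}

wait_for 3 true && echo "device appeared"
```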
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.256 [2024-12-10 05:39:36.097505] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:14:18.256 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:18.256 could not add new controller: failed to write to nvme-fabrics device 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.256 05:39:36 
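The failed connect above is the expected outcome of the host access-control check in `rpc.sh`: once the host NQN is removed from the subsystem's allowed list, the target rejects the connection ("does not allow host") and nvme-cli reports an I/O error on `/dev/nvme-fabrics`. A hedged sketch of the three-step round-trip, using the NQNs and address from the trace (the `scripts/rpc.py` path is an assumption, and the commands are shown as comments rather than executed):

```shell
#!/usr/bin/env bash
# Sketch only -- not the SPDK harness itself. NQNs, address, and port are
# copied from the trace; the rpc.py location is assumed.
set -euo pipefail

NQN="nqn.2016-06.io.spdk:cnode1"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562"

# 1) Revoke this host's access to the subsystem:
#      scripts/rpc.py nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"
# 2) A connect from that host must now FAIL (the target logs
#    "Subsystem ... does not allow host ..."):
#      nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
# 3) Re-open the subsystem to any host; the same connect then succeeds:
#      scripts/rpc.py nvmf_subsystem_allow_any_host -e "$NQN"
echo "ACL round-trip for $NQN"
```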
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.256 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.628 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.628 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.628 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.628 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:19.628 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.525 [2024-12-10 05:39:39.436298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.525 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.526 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.897 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.897 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.897 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.897 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:22.897 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.796 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.054 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.055 05:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.055 [2024-12-10 05:39:42.793308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.055 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.988 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.988 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:25.988 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.988 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:25.988 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:28.512 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.512 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 [2024-12-10 05:39:46.083141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.513 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:29.582 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:29.582 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:29.582 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:29.582 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:29.582 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.480 [2024-12-10 05:39:49.396318] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.480 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.852 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.852 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:32.852 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.852 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:32.852 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.750 [2024-12-10 05:39:52.671067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.750 05:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.750 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.123 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.123 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:36.123 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.123 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:36.123 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.020 [2024-12-10 05:39:55.954383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.020 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.278 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 [2024-12-10 05:39:56.006542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.279 
05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 [2024-12-10 05:39:56.054664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.279 
05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 [2024-12-10 05:39:56.102826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 [2024-12-10 
05:39:56.154997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 
05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:38.279 "tick_rate": 2100000000, 00:14:38.279 "poll_groups": [ 00:14:38.279 { 00:14:38.279 "name": "nvmf_tgt_poll_group_000", 00:14:38.279 "admin_qpairs": 2, 00:14:38.279 "io_qpairs": 168, 00:14:38.279 "current_admin_qpairs": 0, 00:14:38.279 "current_io_qpairs": 0, 00:14:38.279 "pending_bdev_io": 0, 00:14:38.279 "completed_nvme_io": 266, 00:14:38.279 "transports": [ 00:14:38.279 { 00:14:38.279 "trtype": "TCP" 00:14:38.279 } 00:14:38.279 ] 00:14:38.279 }, 00:14:38.279 { 00:14:38.279 "name": "nvmf_tgt_poll_group_001", 00:14:38.279 "admin_qpairs": 2, 00:14:38.279 "io_qpairs": 168, 00:14:38.279 "current_admin_qpairs": 0, 00:14:38.279 "current_io_qpairs": 0, 00:14:38.279 "pending_bdev_io": 0, 00:14:38.279 "completed_nvme_io": 267, 00:14:38.279 "transports": [ 00:14:38.279 { 00:14:38.279 "trtype": "TCP" 00:14:38.280 } 00:14:38.280 ] 00:14:38.280 }, 00:14:38.280 { 00:14:38.280 "name": "nvmf_tgt_poll_group_002", 00:14:38.280 "admin_qpairs": 1, 00:14:38.280 "io_qpairs": 168, 00:14:38.280 "current_admin_qpairs": 0, 00:14:38.280 "current_io_qpairs": 0, 00:14:38.280 "pending_bdev_io": 0, 00:14:38.280 "completed_nvme_io": 236, 00:14:38.280 "transports": [ 00:14:38.280 { 00:14:38.280 "trtype": "TCP" 00:14:38.280 } 00:14:38.280 ] 00:14:38.280 }, 00:14:38.280 { 00:14:38.280 "name": "nvmf_tgt_poll_group_003", 00:14:38.280 "admin_qpairs": 2, 00:14:38.280 "io_qpairs": 168, 
00:14:38.280 "current_admin_qpairs": 0, 00:14:38.280 "current_io_qpairs": 0, 00:14:38.280 "pending_bdev_io": 0, 00:14:38.280 "completed_nvme_io": 253, 00:14:38.280 "transports": [ 00:14:38.280 { 00:14:38.280 "trtype": "TCP" 00:14:38.280 } 00:14:38.280 ] 00:14:38.280 } 00:14:38.280 ] 00:14:38.280 }' 00:14:38.280 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:38.280 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:38.280 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:38.280 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.538 rmmod nvme_tcp 00:14:38.538 rmmod nvme_fabrics 00:14:38.538 rmmod nvme_keyring 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 65522 ']' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 65522 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 65522 ']' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 65522 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65522 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65522' 00:14:38.538 killing process with pid 65522 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 65522 00:14:38.538 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 65522 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.797 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.333 00:14:41.333 real 0m33.557s 00:14:41.333 user 1m38.978s 00:14:41.333 sys 0m7.055s 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.333 ************************************ 00:14:41.333 END TEST nvmf_rpc 00:14:41.333 
************************************ 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.333 ************************************ 00:14:41.333 START TEST nvmf_invalid 00:14:41.333 ************************************ 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:41.333 * Looking for test storage... 00:14:41.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.333 --rc genhtml_branch_coverage=1 00:14:41.333 --rc genhtml_function_coverage=1 00:14:41.333 --rc genhtml_legend=1 00:14:41.333 --rc geninfo_all_blocks=1 00:14:41.333 --rc geninfo_unexecuted_blocks=1 00:14:41.333 00:14:41.333 ' 
00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.333 --rc genhtml_branch_coverage=1 00:14:41.333 --rc genhtml_function_coverage=1 00:14:41.333 --rc genhtml_legend=1 00:14:41.333 --rc geninfo_all_blocks=1 00:14:41.333 --rc geninfo_unexecuted_blocks=1 00:14:41.333 00:14:41.333 ' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.333 --rc genhtml_branch_coverage=1 00:14:41.333 --rc genhtml_function_coverage=1 00:14:41.333 --rc genhtml_legend=1 00:14:41.333 --rc geninfo_all_blocks=1 00:14:41.333 --rc geninfo_unexecuted_blocks=1 00:14:41.333 00:14:41.333 ' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.333 --rc genhtml_branch_coverage=1 00:14:41.333 --rc genhtml_function_coverage=1 00:14:41.333 --rc genhtml_legend=1 00:14:41.333 --rc geninfo_all_blocks=1 00:14:41.333 --rc geninfo_unexecuted_blocks=1 00:14:41.333 00:14:41.333 ' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.333 05:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.333 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.334 
05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.334 05:39:58 
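An editorial aside on the trace above: `paths/export.sh` unconditionally prepends the same toolchain directories each time it is sourced, which is why the logged PATH repeats `/opt/go`, `/opt/protoc`, and `/opt/golangci` many times. This is harmless but noisy. A duplicate-safe prepend (not part of the original script; shown on a demo variable so nothing here clobbers the real PATH) can be sketched as:

```shell
# Duplicate-safe prepend: skip the entry if it is already on the list.
# Operates on demo_path; the same logic applies verbatim to PATH.
prepend_unique() {
    case ":$demo_path:" in
        *":$1:"*) ;;                        # already present: skip
        *) demo_path="$1:$demo_path" ;;     # otherwise prepend
    esac
}

demo_path=/usr/local/bin:/usr/bin
prepend_unique /opt/go/1.21.1/bin
prepend_unique /opt/go/1.21.1/bin           # second call is a no-op
echo "$demo_path"                           # /opt/go/1.21.1/bin:/usr/local/bin:/usr/bin
```

With this guard, repeated sourcing leaves the variable fixed-point stable instead of growing on every invocation.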
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.334 05:39:58 
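The `common.sh: line 33: [: : integer expression expected` message captured above comes from testing an empty variable with `-eq` (`'[' '' -eq 1 ']'` in the trace). The test exits non-zero and the run continues, so it is noise rather than a failure, but it is worth seeing why. A minimal reproduction plus the usual guard (the variable name here is illustrative, not the script's):

```shell
flag=''

# Reproduces the logged error: -eq needs an integer on both sides.
# The test's exit status is non-zero, so the else branch runs.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag unset or non-numeric"
fi

# Safer form: default the expansion so [ always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then echo set; else echo unset; fi
```

With `${flag:-0}` the comparison is well-defined whether or not the flag was ever assigned.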
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.334 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:47.902 05:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.902 05:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:47.902 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:47.902 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.902 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:47.903 Found net devices under 0000:af:00.0: cvl_0_0 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:47.903 Found net devices under 0000:af:00.1: cvl_0_1 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.903 05:40:05 
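For readers following `gather_supported_nvmf_pci_devs` above: the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from resolving each PCI address to its netdev name through sysfs and stripping the path. A condensed sketch of that lookup (the sysfs path only resolves on a host that actually has the device; elsewhere the glob stays unexpanded):

```shell
# Resolve a PCI address to the netdev name(s) registered under it in sysfs,
# mirroring nvmf/common.sh@411 and @427-428 in the trace.
pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

The names collected this way feed `net_devs`, from which the target and initiator interfaces are picked later in the trace.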
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.903 05:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:47.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:14:47.903 00:14:47.903 --- 10.0.0.2 ping statistics --- 00:14:47.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.903 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:14:47.903 00:14:47.903 --- 10.0.0.1 ping statistics --- 00:14:47.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.903 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:47.903 05:40:05 
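The `nvmf_tcp_init` sequence above builds a two-endpoint topology from one physical NIC: port `cvl_0_0` moves into a private network namespace as the target (10.0.0.2), its sibling `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits the NVMe/TCP port, and a ping verifies reachability. A dry-run sketch of the equivalent commands, with names and addresses taken from the trace (the `run` wrapper just prints, so this is safe without root; drop it to actually apply):

```shell
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # dry-run wrapper: print the command instead of executing

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
# Admit the NVMe/TCP discovery/IO port; the comment tag lets common.sh's ipts
# wrapper find and delete the rule during teardown.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: tagged for cleanup'
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

Keeping both endpoints on one host while still exercising a real NIC is the point of the namespace split: traffic leaves through the hardware rather than loopback.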
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=73544 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 73544 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 73544 ']' 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.903 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:47.903 [2024-12-10 05:40:05.772956] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:14:47.903 [2024-12-10 05:40:05.773007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.160 [2024-12-10 05:40:05.857787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.160 [2024-12-10 05:40:05.900823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.160 [2024-12-10 05:40:05.900862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.160 [2024-12-10 05:40:05.900870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.160 [2024-12-10 05:40:05.900876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.160 [2024-12-10 05:40:05.900881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
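The `waitforlisten 73544` step above blocks until `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. Its core is a bounded poll; a minimal equivalent loop (function name is illustrative, the socket path and `max_retries=100` default come from the trace):

```shell
# Poll for a UNIX-domain socket to appear, up to max_retries * 0.1s.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock ]] && return 0     # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                           # timed out
}
```

The real helper additionally checks that the PID is still alive between polls, so a crashed target fails fast instead of burning the full retry budget.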
00:14:48.160 [2024-12-10 05:40:05.902313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.160 [2024-12-10 05:40:05.902350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.160 [2024-12-10 05:40:05.902477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.160 [2024-12-10 05:40:05.902479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:48.723 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12242 00:14:48.980 [2024-12-10 05:40:06.826986] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:48.980 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:48.980 { 00:14:48.980 "nqn": "nqn.2016-06.io.spdk:cnode12242", 00:14:48.980 "tgt_name": "foobar", 00:14:48.981 "method": "nvmf_create_subsystem", 00:14:48.981 "req_id": 1 00:14:48.981 } 00:14:48.981 Got JSON-RPC error 
response 00:14:48.981 response: 00:14:48.981 { 00:14:48.981 "code": -32603, 00:14:48.981 "message": "Unable to find target foobar" 00:14:48.981 }' 00:14:48.981 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:48.981 { 00:14:48.981 "nqn": "nqn.2016-06.io.spdk:cnode12242", 00:14:48.981 "tgt_name": "foobar", 00:14:48.981 "method": "nvmf_create_subsystem", 00:14:48.981 "req_id": 1 00:14:48.981 } 00:14:48.981 Got JSON-RPC error response 00:14:48.981 response: 00:14:48.981 { 00:14:48.981 "code": -32603, 00:14:48.981 "message": "Unable to find target foobar" 00:14:48.981 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:48.981 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:48.981 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7442 00:14:49.239 [2024-12-10 05:40:07.035726] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7442: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:49.239 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:49.239 { 00:14:49.239 "nqn": "nqn.2016-06.io.spdk:cnode7442", 00:14:49.239 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:49.239 "method": "nvmf_create_subsystem", 00:14:49.239 "req_id": 1 00:14:49.239 } 00:14:49.239 Got JSON-RPC error response 00:14:49.239 response: 00:14:49.239 { 00:14:49.239 "code": -32602, 00:14:49.239 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:49.239 }' 00:14:49.239 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:49.239 { 00:14:49.239 "nqn": "nqn.2016-06.io.spdk:cnode7442", 00:14:49.239 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:49.239 "method": "nvmf_create_subsystem", 00:14:49.239 
"req_id": 1 00:14:49.239 } 00:14:49.239 Got JSON-RPC error response 00:14:49.239 response: 00:14:49.239 { 00:14:49.239 "code": -32602, 00:14:49.239 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:49.239 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:49.239 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:49.239 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24427 00:14:49.497 [2024-12-10 05:40:07.260420] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24427: invalid model number 'SPDK_Controller' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:49.497 { 00:14:49.497 "nqn": "nqn.2016-06.io.spdk:cnode24427", 00:14:49.497 "model_number": "SPDK_Controller\u001f", 00:14:49.497 "method": "nvmf_create_subsystem", 00:14:49.497 "req_id": 1 00:14:49.497 } 00:14:49.497 Got JSON-RPC error response 00:14:49.497 response: 00:14:49.497 { 00:14:49.497 "code": -32602, 00:14:49.497 "message": "Invalid MN SPDK_Controller\u001f" 00:14:49.497 }' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:49.497 { 00:14:49.497 "nqn": "nqn.2016-06.io.spdk:cnode24427", 00:14:49.497 "model_number": "SPDK_Controller\u001f", 00:14:49.497 "method": "nvmf_create_subsystem", 00:14:49.497 "req_id": 1 00:14:49.497 } 00:14:49.497 Got JSON-RPC error response 00:14:49.497 response: 00:14:49.497 { 00:14:49.497 "code": -32602, 00:14:49.497 "message": "Invalid MN SPDK_Controller\u001f" 00:14:49.497 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.497 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:49.497 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:49.498 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:49.498 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 
00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:49.498 
05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '_q]o_[eGDXF?f8PshSW]S' 00:14:49.498 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '_q]o_[eGDXF?f8PshSW]S' nqn.2016-06.io.spdk:cnode6798 00:14:49.756 [2024-12-10 05:40:07.601534] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6798: invalid serial number '_q]o_[eGDXF?f8PshSW]S' 00:14:49.756 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:49.756 { 00:14:49.756 "nqn": "nqn.2016-06.io.spdk:cnode6798", 00:14:49.756 "serial_number": "_q]o_[eGDXF?f8PshSW]S", 00:14:49.756 "method": "nvmf_create_subsystem", 00:14:49.756 "req_id": 1 00:14:49.756 } 00:14:49.756 Got JSON-RPC error response 00:14:49.756 response: 00:14:49.756 { 00:14:49.756 "code": -32602, 
00:14:49.756 "message": "Invalid SN _q]o_[eGDXF?f8PshSW]S" 00:14:49.756 }' 00:14:49.756 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:49.756 { 00:14:49.756 "nqn": "nqn.2016-06.io.spdk:cnode6798", 00:14:49.756 "serial_number": "_q]o_[eGDXF?f8PshSW]S", 00:14:49.756 "method": "nvmf_create_subsystem", 00:14:49.756 "req_id": 1 00:14:49.756 } 00:14:49.756 Got JSON-RPC error response 00:14:49.756 response: 00:14:49.756 { 00:14:49.756 "code": -32602, 00:14:49.756 "message": "Invalid SN _q]o_[eGDXF?f8PshSW]S" 00:14:49.756 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:49.756 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:49.756 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:49.757 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:49.757 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:50.016 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:50.016 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:50.016 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:50.016 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:50.016 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:50.017 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:50.017 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn' 00:14:50.017 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn' nqn.2016-06.io.spdk:cnode16823 00:14:50.275 [2024-12-10 05:40:08.091144] nvmf_rpc.c: 
422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16823: invalid model number '1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn' 00:14:50.275 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:50.275 { 00:14:50.275 "nqn": "nqn.2016-06.io.spdk:cnode16823", 00:14:50.275 "model_number": "1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn", 00:14:50.275 "method": "nvmf_create_subsystem", 00:14:50.275 "req_id": 1 00:14:50.275 } 00:14:50.275 Got JSON-RPC error response 00:14:50.275 response: 00:14:50.275 { 00:14:50.275 "code": -32602, 00:14:50.275 "message": "Invalid MN 1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn" 00:14:50.275 }' 00:14:50.275 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:50.275 { 00:14:50.275 "nqn": "nqn.2016-06.io.spdk:cnode16823", 00:14:50.275 "model_number": "1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn", 00:14:50.275 "method": "nvmf_create_subsystem", 00:14:50.275 "req_id": 1 00:14:50.275 } 00:14:50.275 Got JSON-RPC error response 00:14:50.275 response: 00:14:50.275 { 00:14:50.275 "code": -32602, 00:14:50.275 "message": "Invalid MN 1PSISb6tAFZ=$agQdM/_f`|6Aom8:ujX@F.?@uPPn" 00:14:50.275 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:50.275 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:50.532 [2024-12-10 05:40:08.287859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.532 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:50.790 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:50.790 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:50.790 05:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:50.790 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:50.790 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:50.790 [2024-12-10 05:40:08.726553] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:51.047 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:51.047 { 00:14:51.047 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:51.047 "listen_address": { 00:14:51.047 "trtype": "tcp", 00:14:51.047 "traddr": "", 00:14:51.047 "trsvcid": "4421" 00:14:51.047 }, 00:14:51.047 "method": "nvmf_subsystem_remove_listener", 00:14:51.047 "req_id": 1 00:14:51.047 } 00:14:51.047 Got JSON-RPC error response 00:14:51.047 response: 00:14:51.047 { 00:14:51.047 "code": -32602, 00:14:51.047 "message": "Invalid parameters" 00:14:51.047 }' 00:14:51.047 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:51.047 { 00:14:51.047 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:51.047 "listen_address": { 00:14:51.047 "trtype": "tcp", 00:14:51.047 "traddr": "", 00:14:51.047 "trsvcid": "4421" 00:14:51.047 }, 00:14:51.047 "method": "nvmf_subsystem_remove_listener", 00:14:51.047 "req_id": 1 00:14:51.047 } 00:14:51.047 Got JSON-RPC error response 00:14:51.047 response: 00:14:51.047 { 00:14:51.047 "code": -32602, 00:14:51.047 "message": "Invalid parameters" 00:14:51.047 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:51.047 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23761 -i 0 00:14:51.047 [2024-12-10 05:40:08.915122] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23761: invalid cntlid range [0-65519] 00:14:51.047 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:51.047 { 00:14:51.047 "nqn": "nqn.2016-06.io.spdk:cnode23761", 00:14:51.047 "min_cntlid": 0, 00:14:51.047 "method": "nvmf_create_subsystem", 00:14:51.047 "req_id": 1 00:14:51.047 } 00:14:51.047 Got JSON-RPC error response 00:14:51.047 response: 00:14:51.047 { 00:14:51.047 "code": -32602, 00:14:51.047 "message": "Invalid cntlid range [0-65519]" 00:14:51.047 }' 00:14:51.047 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:51.047 { 00:14:51.047 "nqn": "nqn.2016-06.io.spdk:cnode23761", 00:14:51.047 "min_cntlid": 0, 00:14:51.047 "method": "nvmf_create_subsystem", 00:14:51.047 "req_id": 1 00:14:51.047 } 00:14:51.047 Got JSON-RPC error response 00:14:51.047 response: 00:14:51.047 { 00:14:51.047 "code": -32602, 00:14:51.047 "message": "Invalid cntlid range [0-65519]" 00:14:51.047 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:51.047 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20012 -i 65520 00:14:51.304 [2024-12-10 05:40:09.115828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20012: invalid cntlid range [65520-65519] 00:14:51.304 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:51.304 { 00:14:51.304 "nqn": "nqn.2016-06.io.spdk:cnode20012", 00:14:51.304 "min_cntlid": 65520, 00:14:51.304 "method": "nvmf_create_subsystem", 00:14:51.304 "req_id": 1 00:14:51.304 } 00:14:51.304 Got JSON-RPC error response 00:14:51.304 response: 00:14:51.304 { 00:14:51.304 "code": -32602, 00:14:51.304 "message": "Invalid cntlid range [65520-65519]" 00:14:51.304 }' 00:14:51.304 05:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:51.304 { 00:14:51.304 "nqn": "nqn.2016-06.io.spdk:cnode20012", 00:14:51.304 "min_cntlid": 65520, 00:14:51.304 "method": "nvmf_create_subsystem", 00:14:51.304 "req_id": 1 00:14:51.304 } 00:14:51.304 Got JSON-RPC error response 00:14:51.304 response: 00:14:51.304 { 00:14:51.304 "code": -32602, 00:14:51.304 "message": "Invalid cntlid range [65520-65519]" 00:14:51.304 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:51.304 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11144 -I 0 00:14:51.561 [2024-12-10 05:40:09.316489] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11144: invalid cntlid range [1-0] 00:14:51.561 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:51.561 { 00:14:51.561 "nqn": "nqn.2016-06.io.spdk:cnode11144", 00:14:51.561 "max_cntlid": 0, 00:14:51.561 "method": "nvmf_create_subsystem", 00:14:51.561 "req_id": 1 00:14:51.561 } 00:14:51.561 Got JSON-RPC error response 00:14:51.561 response: 00:14:51.561 { 00:14:51.561 "code": -32602, 00:14:51.561 "message": "Invalid cntlid range [1-0]" 00:14:51.561 }' 00:14:51.561 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:51.561 { 00:14:51.561 "nqn": "nqn.2016-06.io.spdk:cnode11144", 00:14:51.561 "max_cntlid": 0, 00:14:51.561 "method": "nvmf_create_subsystem", 00:14:51.561 "req_id": 1 00:14:51.561 } 00:14:51.561 Got JSON-RPC error response 00:14:51.561 response: 00:14:51.561 { 00:14:51.561 "code": -32602, 00:14:51.561 "message": "Invalid cntlid range [1-0]" 00:14:51.561 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:51.561 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31541 -I 65520 00:14:51.561 [2024-12-10 05:40:09.513118] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31541: invalid cntlid range [1-65520] 00:14:51.819 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:51.819 { 00:14:51.819 "nqn": "nqn.2016-06.io.spdk:cnode31541", 00:14:51.819 "max_cntlid": 65520, 00:14:51.819 "method": "nvmf_create_subsystem", 00:14:51.819 "req_id": 1 00:14:51.819 } 00:14:51.819 Got JSON-RPC error response 00:14:51.819 response: 00:14:51.819 { 00:14:51.819 "code": -32602, 00:14:51.819 "message": "Invalid cntlid range [1-65520]" 00:14:51.819 }' 00:14:51.819 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:51.819 { 00:14:51.819 "nqn": "nqn.2016-06.io.spdk:cnode31541", 00:14:51.819 "max_cntlid": 65520, 00:14:51.819 "method": "nvmf_create_subsystem", 00:14:51.819 "req_id": 1 00:14:51.819 } 00:14:51.819 Got JSON-RPC error response 00:14:51.819 response: 00:14:51.819 { 00:14:51.819 "code": -32602, 00:14:51.819 "message": "Invalid cntlid range [1-65520]" 00:14:51.819 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:51.819 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12887 -i 6 -I 5 00:14:51.819 [2024-12-10 05:40:09.705780] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12887: invalid cntlid range [6-5] 00:14:51.819 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:51.819 { 00:14:51.819 "nqn": "nqn.2016-06.io.spdk:cnode12887", 00:14:51.819 "min_cntlid": 6, 00:14:51.819 "max_cntlid": 5, 00:14:51.819 "method": "nvmf_create_subsystem", 00:14:51.819 "req_id": 1 00:14:51.819 } 
00:14:51.819 Got JSON-RPC error response 00:14:51.819 response: 00:14:51.819 { 00:14:51.819 "code": -32602, 00:14:51.819 "message": "Invalid cntlid range [6-5]" 00:14:51.819 }' 00:14:51.819 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:51.819 { 00:14:51.819 "nqn": "nqn.2016-06.io.spdk:cnode12887", 00:14:51.819 "min_cntlid": 6, 00:14:51.819 "max_cntlid": 5, 00:14:51.819 "method": "nvmf_create_subsystem", 00:14:51.819 "req_id": 1 00:14:51.819 } 00:14:51.819 Got JSON-RPC error response 00:14:51.819 response: 00:14:51.819 { 00:14:51.819 "code": -32602, 00:14:51.819 "message": "Invalid cntlid range [6-5]" 00:14:51.819 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:51.819 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:52.078 { 00:14:52.078 "name": "foobar", 00:14:52.078 "method": "nvmf_delete_target", 00:14:52.078 "req_id": 1 00:14:52.078 } 00:14:52.078 Got JSON-RPC error response 00:14:52.078 response: 00:14:52.078 { 00:14:52.078 "code": -32602, 00:14:52.078 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:52.078 }' 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:52.078 { 00:14:52.078 "name": "foobar", 00:14:52.078 "method": "nvmf_delete_target", 00:14:52.078 "req_id": 1 00:14:52.078 } 00:14:52.078 Got JSON-RPC error response 00:14:52.078 response: 00:14:52.078 { 00:14:52.078 "code": -32602, 00:14:52.078 "message": "The specified target doesn't exist, cannot delete it." 
00:14:52.078 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.078 rmmod nvme_tcp 00:14:52.078 rmmod nvme_fabrics 00:14:52.078 rmmod nvme_keyring 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 73544 ']' 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 73544 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 73544 ']' 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 73544 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73544 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73544' 00:14:52.078 killing process with pid 73544 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 73544 00:14:52.078 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 73544 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.337 05:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.337 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:54.872 00:14:54.872 real 0m13.471s 00:14:54.872 user 0m21.666s 00:14:54.872 sys 0m5.948s 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:54.872 ************************************ 00:14:54.872 END TEST nvmf_invalid 00:14:54.872 ************************************ 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.872 ************************************ 00:14:54.872 START TEST nvmf_connect_stress 00:14:54.872 ************************************ 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:54.872 * Looking for test storage... 
00:14:54.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.872 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:54.873 05:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.873 05:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.873 --rc genhtml_branch_coverage=1 00:14:54.873 --rc genhtml_function_coverage=1 00:14:54.873 --rc genhtml_legend=1 00:14:54.873 --rc geninfo_all_blocks=1 00:14:54.873 --rc geninfo_unexecuted_blocks=1 00:14:54.873 00:14:54.873 ' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.873 --rc genhtml_branch_coverage=1 00:14:54.873 --rc genhtml_function_coverage=1 00:14:54.873 --rc genhtml_legend=1 00:14:54.873 --rc geninfo_all_blocks=1 00:14:54.873 --rc geninfo_unexecuted_blocks=1 00:14:54.873 00:14:54.873 ' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.873 --rc genhtml_branch_coverage=1 00:14:54.873 --rc genhtml_function_coverage=1 00:14:54.873 --rc genhtml_legend=1 00:14:54.873 --rc geninfo_all_blocks=1 00:14:54.873 --rc geninfo_unexecuted_blocks=1 00:14:54.873 00:14:54.873 ' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.873 --rc genhtml_branch_coverage=1 00:14:54.873 --rc genhtml_function_coverage=1 00:14:54.873 --rc genhtml_legend=1 00:14:54.873 --rc geninfo_all_blocks=1 00:14:54.873 --rc geninfo_unexecuted_blocks=1 00:14:54.873 00:14:54.873 ' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.873 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.441 05:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:01.441 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.441 05:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:01.441 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.441 05:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:01.441 Found net devices under 0000:af:00.0: cvl_0_0 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:01.441 Found net devices under 0000:af:00.1: cvl_0_1 
00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.441 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:01.442 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:01.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:01.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:15:01.442 00:15:01.442 --- 10.0.0.2 ping statistics --- 00:15:01.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.442 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:15:01.442 00:15:01.442 --- 10.0.0.1 ping statistics --- 00:15:01.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.442 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:01.442 05:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=78342 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 78342 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 78342 ']' 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.442 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 [2024-12-10 05:40:19.314443] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:15:01.442 [2024-12-10 05:40:19.314486] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.699 [2024-12-10 05:40:19.402207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:01.699 [2024-12-10 05:40:19.443515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.699 [2024-12-10 05:40:19.443545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.699 [2024-12-10 05:40:19.443552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.699 [2024-12-10 05:40:19.443558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.699 [2024-12-10 05:40:19.443563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.699 [2024-12-10 05:40:19.444869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.699 [2024-12-10 05:40:19.444885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.699 [2024-12-10 05:40:19.444889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.262 [2024-12-10 05:40:20.202418] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.262 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.520 [2024-12-10 05:40:20.222634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.520 NULL1 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=78583 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.520 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.521 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.779 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.779 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:02.779 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.779 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.779 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.036 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.036 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:03.036 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.036 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.036 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.601 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.601 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:03.601 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.601 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.601 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.858 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.858 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:03.858 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.858 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.858 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.116 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.116 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:04.116 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.116 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.116 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.656 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 78583 00:15:12.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (78583) - No such process 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 78583 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.914 rmmod nvme_tcp 00:15:12.914 rmmod nvme_fabrics 00:15:12.914 rmmod nvme_keyring 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@517 -- # '[' -n 78342 ']' 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 78342 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 78342 ']' 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 78342 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78342 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78342' 00:15:12.914 killing process with pid 78342 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 78342 00:15:12.914 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 78342 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 
-- # iptables-save 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.173 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.708 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:15.708 00:15:15.708 real 0m20.799s 00:15:15.708 user 0m42.630s 00:15:15.708 sys 0m9.250s 00:15:15.708 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.708 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.709 ************************************ 00:15:15.709 END TEST nvmf_connect_stress 00:15:15.709 ************************************ 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.709 ************************************ 00:15:15.709 START TEST nvmf_fused_ordering 00:15:15.709 ************************************ 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:15.709 * Looking for test storage... 00:15:15.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.709 05:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.709 --rc genhtml_branch_coverage=1 00:15:15.709 --rc genhtml_function_coverage=1 00:15:15.709 --rc genhtml_legend=1 00:15:15.709 --rc geninfo_all_blocks=1 00:15:15.709 --rc geninfo_unexecuted_blocks=1 00:15:15.709 00:15:15.709 ' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.709 --rc genhtml_branch_coverage=1 00:15:15.709 --rc genhtml_function_coverage=1 00:15:15.709 --rc genhtml_legend=1 00:15:15.709 --rc geninfo_all_blocks=1 00:15:15.709 --rc geninfo_unexecuted_blocks=1 00:15:15.709 00:15:15.709 ' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.709 --rc genhtml_branch_coverage=1 00:15:15.709 --rc genhtml_function_coverage=1 00:15:15.709 --rc genhtml_legend=1 00:15:15.709 --rc geninfo_all_blocks=1 00:15:15.709 --rc geninfo_unexecuted_blocks=1 00:15:15.709 00:15:15.709 ' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.709 --rc genhtml_branch_coverage=1 00:15:15.709 --rc 
genhtml_function_coverage=1 00:15:15.709 --rc genhtml_legend=1 00:15:15.709 --rc geninfo_all_blocks=1 00:15:15.709 --rc geninfo_unexecuted_blocks=1 00:15:15.709 00:15:15.709 ' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:15.709 05:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.709 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:15.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
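The `[: : integer expression expected` warning recorded above (nvmf/common.sh line 33, from `'[' '' -eq 1 ']'`) is the usual bash pitfall of applying the numeric `-eq` operator to an empty or unset variable. A minimal sketch of the failure and the common `${var:-0}` guard follows; the `flag` and `state` names are illustrative, not from the SPDK scripts:

```shell
#!/usr/bin/env bash
# Reproduce the pitfall: -eq requires an integer operand, so testing an
# empty variable prints "[: : integer expression expected" on stderr and
# returns a nonzero status (stderr suppressed here to keep output clean).
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo "unguarded: enabled"

# Guarded form: supply a numeric default before comparing, so the test
# is always well-formed even when the variable is empty or unset.
state="enabled"
if ! [ "${flag:-0}" -eq 1 ]; then
  state="disabled"
fi
echo "$state"
```

With `flag` empty, the unguarded test errors out silently and the guarded test falls through, printing `disabled`. The `${flag:-0}` expansion substitutes `0` only for the comparison and leaves `flag` itself untouched.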
00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:15.710 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.278 05:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:22.278 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.278 05:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:22.278 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.278 05:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:22.278 Found net devices under 0000:af:00.0: cvl_0_0 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:22.278 Found net devices under 0000:af:00.1: cvl_0_1 
00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.278 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:22.279 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:22.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:22.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:15:22.279 00:15:22.279 --- 10.0.0.2 ping statistics --- 00:15:22.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.279 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:15:22.279 00:15:22.279 --- 10.0.0.1 ping statistics --- 00:15:22.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.279 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:22.279 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=84214 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 84214 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 84214 ']' 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.279 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.279 [2024-12-10 05:40:40.170237] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:15:22.279 [2024-12-10 05:40:40.170281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.537 [2024-12-10 05:40:40.255119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.537 [2024-12-10 05:40:40.293448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.537 [2024-12-10 05:40:40.293487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.537 [2024-12-10 05:40:40.293494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.537 [2024-12-10 05:40:40.293501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.537 [2024-12-10 05:40:40.293507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.537 [2024-12-10 05:40:40.294042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 [2024-12-10 05:40:40.442405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 [2024-12-10 05:40:40.462599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 NULL1 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.537 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.794 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.794 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:22.794 [2024-12-10 05:40:40.521186] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:15:22.794 [2024-12-10 05:40:40.521222] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84238 ] 00:15:23.066 Attached to nqn.2016-06.io.spdk:cnode1 00:15:23.066 Namespace ID: 1 size: 1GB 00:15:23.066 fused_ordering(0) 00:15:23.066 fused_ordering(1) 00:15:23.066 fused_ordering(2) 00:15:23.066 fused_ordering(3) 00:15:23.066 fused_ordering(4) 00:15:23.066 fused_ordering(5) 00:15:23.066 fused_ordering(6) 00:15:23.066 fused_ordering(7) 00:15:23.066 fused_ordering(8) 00:15:23.066 fused_ordering(9) 00:15:23.066 fused_ordering(10) 00:15:23.066 fused_ordering(11) 00:15:23.066 fused_ordering(12) 00:15:23.066 fused_ordering(13) 00:15:23.066 fused_ordering(14) 00:15:23.066 fused_ordering(15) 00:15:23.066 fused_ordering(16) 00:15:23.066 fused_ordering(17) 00:15:23.066 fused_ordering(18) 00:15:23.066 fused_ordering(19) 00:15:23.066 fused_ordering(20) 00:15:23.066 fused_ordering(21) 00:15:23.066 fused_ordering(22) 00:15:23.066 fused_ordering(23) 00:15:23.066 fused_ordering(24) 00:15:23.066 fused_ordering(25) 00:15:23.066 fused_ordering(26) 00:15:23.066 fused_ordering(27) 00:15:23.066 fused_ordering(28) 
00:15:23.066 fused_ordering(29) 00:15:23.066 fused_ordering(30) 00:15:23.066 fused_ordering(31) 00:15:23.066 fused_ordering(32) 00:15:23.066 fused_ordering(33) 00:15:23.066 fused_ordering(34) 00:15:23.066 fused_ordering(35) 00:15:23.066 fused_ordering(36) 00:15:23.066 fused_ordering(37) 00:15:23.066 fused_ordering(38) 00:15:23.066 fused_ordering(39) 00:15:23.066 fused_ordering(40) 00:15:23.066 fused_ordering(41) 00:15:23.066 fused_ordering(42) 00:15:23.066 fused_ordering(43) 00:15:23.066 fused_ordering(44) 00:15:23.066 fused_ordering(45) 00:15:23.066 fused_ordering(46) 00:15:23.066 fused_ordering(47) 00:15:23.066 fused_ordering(48) 00:15:23.066 fused_ordering(49) 00:15:23.066 fused_ordering(50) 00:15:23.066 fused_ordering(51) 00:15:23.066 fused_ordering(52) 00:15:23.066 fused_ordering(53) 00:15:23.066 fused_ordering(54) 00:15:23.066 fused_ordering(55) 00:15:23.066 fused_ordering(56) 00:15:23.066 fused_ordering(57) 00:15:23.066 fused_ordering(58) 00:15:23.066 fused_ordering(59) 00:15:23.066 fused_ordering(60) 00:15:23.066 fused_ordering(61) 00:15:23.066 fused_ordering(62) 00:15:23.066 fused_ordering(63) 00:15:23.066 fused_ordering(64) 00:15:23.066 fused_ordering(65) 00:15:23.066 fused_ordering(66) 00:15:23.066 fused_ordering(67) 00:15:23.066 fused_ordering(68) 00:15:23.066 fused_ordering(69) 00:15:23.066 fused_ordering(70) 00:15:23.066 fused_ordering(71) 00:15:23.066 fused_ordering(72) 00:15:23.066 fused_ordering(73) 00:15:23.066 fused_ordering(74) 00:15:23.066 fused_ordering(75) 00:15:23.066 fused_ordering(76) 00:15:23.066 fused_ordering(77) 00:15:23.066 fused_ordering(78) 00:15:23.066 fused_ordering(79) 00:15:23.066 fused_ordering(80) 00:15:23.066 fused_ordering(81) 00:15:23.066 fused_ordering(82) 00:15:23.066 fused_ordering(83) 00:15:23.066 fused_ordering(84) 00:15:23.066 fused_ordering(85) 00:15:23.066 fused_ordering(86) 00:15:23.066 fused_ordering(87) 00:15:23.066 fused_ordering(88) 00:15:23.066 fused_ordering(89) 00:15:23.066 fused_ordering(90) 00:15:23.066 
fused_ordering(91) 00:15:23.066 fused_ordering(92) 00:15:23.066 fused_ordering(93) 00:15:23.066 fused_ordering(94) 00:15:23.066 fused_ordering(95) 00:15:23.066 fused_ordering(96) 00:15:23.066 fused_ordering(97) 00:15:23.066 fused_ordering(98) 00:15:23.066 fused_ordering(99) 00:15:23.066 fused_ordering(100) 00:15:23.066 fused_ordering(101) 00:15:23.066 fused_ordering(102) 00:15:23.066 fused_ordering(103) 00:15:23.066 fused_ordering(104) 00:15:23.066 fused_ordering(105) 00:15:23.066 fused_ordering(106) 00:15:23.066 fused_ordering(107) 00:15:23.067 fused_ordering(108) 00:15:23.067 fused_ordering(109) 00:15:23.067 fused_ordering(110) 00:15:23.067 fused_ordering(111) 00:15:23.067 fused_ordering(112) 00:15:23.067 fused_ordering(113) 00:15:23.067 fused_ordering(114) 00:15:23.067 fused_ordering(115) 00:15:23.067 fused_ordering(116) 00:15:23.067 fused_ordering(117) 00:15:23.067 fused_ordering(118) 00:15:23.067 fused_ordering(119) 00:15:23.067 fused_ordering(120) 00:15:23.067 fused_ordering(121) 00:15:23.067 fused_ordering(122) 00:15:23.067 fused_ordering(123) 00:15:23.067 fused_ordering(124) 00:15:23.067 fused_ordering(125) 00:15:23.067 fused_ordering(126) 00:15:23.067 fused_ordering(127) 00:15:23.067 fused_ordering(128) 00:15:23.067 fused_ordering(129) 00:15:23.067 fused_ordering(130) 00:15:23.067 fused_ordering(131) 00:15:23.067 fused_ordering(132) 00:15:23.067 fused_ordering(133) 00:15:23.067 fused_ordering(134) 00:15:23.067 fused_ordering(135) 00:15:23.067 fused_ordering(136) 00:15:23.067 fused_ordering(137) 00:15:23.067 fused_ordering(138) 00:15:23.067 fused_ordering(139) 00:15:23.067 fused_ordering(140) 00:15:23.067 fused_ordering(141) 00:15:23.067 fused_ordering(142) 00:15:23.067 fused_ordering(143) 00:15:23.067 fused_ordering(144) 00:15:23.067 fused_ordering(145) 00:15:23.067 fused_ordering(146) 00:15:23.067 fused_ordering(147) 00:15:23.067 fused_ordering(148) 00:15:23.067 fused_ordering(149) 00:15:23.067 fused_ordering(150) 00:15:23.067 fused_ordering(151) 
00:15:23.067 fused_ordering(152) 00:15:23.067 fused_ordering(153) 00:15:23.067 fused_ordering(154) 00:15:23.067 fused_ordering(155) 00:15:23.067 fused_ordering(156) 00:15:23.067 fused_ordering(157) 00:15:23.067 fused_ordering(158) 00:15:23.067 fused_ordering(159) 00:15:23.067 fused_ordering(160) 00:15:23.067 fused_ordering(161) 00:15:23.067 fused_ordering(162) 00:15:23.067 fused_ordering(163) 00:15:23.067 fused_ordering(164) 00:15:23.067 fused_ordering(165) 00:15:23.067 fused_ordering(166) 00:15:23.067 fused_ordering(167) 00:15:23.067 fused_ordering(168) 00:15:23.067 fused_ordering(169) 00:15:23.067 fused_ordering(170) 00:15:23.067 fused_ordering(171) 00:15:23.067 fused_ordering(172) 00:15:23.067 fused_ordering(173) 00:15:23.067 fused_ordering(174) 00:15:23.067 fused_ordering(175) 00:15:23.067 fused_ordering(176) 00:15:23.067 fused_ordering(177) 00:15:23.067 fused_ordering(178) 00:15:23.067 fused_ordering(179) 00:15:23.067 fused_ordering(180) 00:15:23.067 fused_ordering(181) 00:15:23.067 fused_ordering(182) 00:15:23.067 fused_ordering(183) 00:15:23.067 fused_ordering(184) 00:15:23.067 fused_ordering(185) 00:15:23.067 fused_ordering(186) 00:15:23.067 fused_ordering(187) 00:15:23.067 fused_ordering(188) 00:15:23.067 fused_ordering(189) 00:15:23.067 fused_ordering(190) 00:15:23.067 fused_ordering(191) 00:15:23.067 fused_ordering(192) 00:15:23.067 fused_ordering(193) 00:15:23.067 fused_ordering(194) 00:15:23.067 fused_ordering(195) 00:15:23.067 fused_ordering(196) 00:15:23.067 fused_ordering(197) 00:15:23.067 fused_ordering(198) 00:15:23.067 fused_ordering(199) 00:15:23.067 fused_ordering(200) 00:15:23.067 fused_ordering(201) 00:15:23.067 fused_ordering(202) 00:15:23.067 fused_ordering(203) 00:15:23.067 fused_ordering(204) 00:15:23.067 fused_ordering(205) 00:15:23.382 fused_ordering(206) 00:15:23.382 fused_ordering(207) 00:15:23.382 fused_ordering(208) 00:15:23.382 fused_ordering(209) 00:15:23.382 fused_ordering(210) 00:15:23.382 fused_ordering(211) 00:15:23.382 
00:15:23.382 fused_ordering(212) [repetitive per-iteration output fused_ordering(213)-(997) elided; all iterations completed between 00:15:23.382 and 00:15:24.505] 00:15:24.505 fused_ordering(998)
00:15:24.505 fused_ordering(999) [repetitive per-iteration output fused_ordering(1000)-(1022) elided] 00:15:24.506 fused_ordering(1023) 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.506 rmmod nvme_tcp 00:15:24.506 rmmod nvme_fabrics 00:15:24.506 rmmod nvme_keyring 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.506 05:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 84214 ']' 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 84214 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 84214 ']' 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 84214 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84214 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84214' 00:15:24.506 killing process with pid 84214 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 84214 00:15:24.506 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 84214 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.765 05:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.765 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:27.300 00:15:27.300 real 0m11.506s 00:15:27.300 user 0m5.242s 00:15:27.300 sys 0m6.443s 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 ************************************ 00:15:27.300 END TEST nvmf_fused_ordering 00:15:27.300 ************************************ 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.300 05:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 ************************************ 00:15:27.300 START TEST nvmf_ns_masking 00:15:27.301 ************************************ 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:27.301 * Looking for test storage... 00:15:27.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@338 -- # local 'op=<' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.301 --rc genhtml_branch_coverage=1 00:15:27.301 --rc genhtml_function_coverage=1 00:15:27.301 --rc genhtml_legend=1 00:15:27.301 --rc geninfo_all_blocks=1 00:15:27.301 --rc geninfo_unexecuted_blocks=1 00:15:27.301 00:15:27.301 ' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.301 --rc genhtml_branch_coverage=1 00:15:27.301 --rc genhtml_function_coverage=1 00:15:27.301 --rc genhtml_legend=1 00:15:27.301 --rc geninfo_all_blocks=1 00:15:27.301 --rc geninfo_unexecuted_blocks=1 00:15:27.301 00:15:27.301 ' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.301 --rc genhtml_branch_coverage=1 00:15:27.301 --rc genhtml_function_coverage=1 00:15:27.301 --rc genhtml_legend=1 00:15:27.301 --rc geninfo_all_blocks=1 00:15:27.301 --rc geninfo_unexecuted_blocks=1 00:15:27.301 00:15:27.301 ' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.301 --rc genhtml_branch_coverage=1 00:15:27.301 --rc genhtml_function_coverage=1 00:15:27.301 --rc 
genhtml_legend=1 00:15:27.301 --rc geninfo_all_blocks=1 00:15:27.301 --rc geninfo_unexecuted_blocks=1 00:15:27.301 00:15:27.301 ' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.301 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=93e16350-db0f-4d10-a42c-561cef21962f 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7adbed16-0fa3-41e2-bbe7-ee7c5a9c8f4e 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=be06c737-4c2f-4240-8a22-26fc6dbe42d8 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:27.302 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:33.870 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.870 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:33.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.870 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:33.871 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:15:33.871 Found net devices under 0000:af:00.0: cvl_0_0 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:33.871 Found net devices under 0000:af:00.1: cvl_0_1 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:33.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:15:33.871 00:15:33.871 --- 10.0.0.2 ping statistics --- 00:15:33.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.871 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:15:33.871 00:15:33.871 --- 10.0.0.1 ping statistics --- 00:15:33.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.871 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=88483 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 88483 
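The trace above starts `nvmf_tgt` inside the network namespace and then calls `waitforlisten 88483` before issuing any RPCs. A minimal sketch of that idea (a hypothetical helper, not SPDK's actual `waitforlisten` implementation) is to poll until the target's RPC UNIX socket appears, with a timeout:

```python
# Hypothetical sketch of the "waitforlisten <pid>" idea: poll until the
# target process has created its RPC UNIX socket (e.g. /var/tmp/spdk.sock),
# or give up after a timeout. Not SPDK's code; path and timings are
# illustrative assumptions.
import os
import time

def wait_for_socket(path: str, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Return True once `path` exists, False if `timeout` elapses first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

Polling for the socket (rather than sleeping a fixed interval) is what lets the test proceed as soon as the target is ready while still failing fast if startup hangs.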
00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 88483 ']' 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.871 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.871 [2024-12-10 05:40:51.779807] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:15:33.871 [2024-12-10 05:40:51.779858] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.130 [2024-12-10 05:40:51.862395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.130 [2024-12-10 05:40:51.900110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.130 [2024-12-10 05:40:51.900144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:34.130 [2024-12-10 05:40:51.900151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.130 [2024-12-10 05:40:51.900158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.130 [2024-12-10 05:40:51.900163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.130 [2024-12-10 05:40:51.900698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.130 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.405 [2024-12-10 05:40:52.221775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.405 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:34.405 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:34.405 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:34.663 Malloc1 00:15:34.663 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:34.920 Malloc2 00:15:34.920 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:34.920 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:35.178 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.436 [2024-12-10 05:40:53.225531] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.436 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:35.436 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I be06c737-4c2f-4240-8a22-26fc6dbe42d8 -a 10.0.0.2 -s 4420 -i 4 00:15:35.436 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.436 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:35.436 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.436 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:35.436 05:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:37.967 [ 0]:0x1 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.967 
05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45bd91ed5108463d8096e6d9fa41e9f4 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45bd91ed5108463d8096e6d9fa41e9f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:37.967 [ 0]:0x1 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45bd91ed5108463d8096e6d9fa41e9f4 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45bd91ed5108463d8096e6d9fa41e9f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:37.967 [ 1]:0x2 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.967 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.225 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:38.483 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:38.483 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I be06c737-4c2f-4240-8a22-26fc6dbe42d8 -a 10.0.0.2 -s 4420 -i 4 00:15:38.741 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:38.741 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:38.741 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.741 05:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:38.741 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:38.741 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:40.641 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.642 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.900 [ 0]:0x2 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.900 [ 0]:0x1 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.900 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45bd91ed5108463d8096e6d9fa41e9f4 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45bd91ed5108463d8096e6d9fa41e9f4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.158 [ 1]:0x2 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.158 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.416 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:41.417 [ 0]:0x2 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.417 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:41.674 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:41.674 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I be06c737-4c2f-4240-8a22-26fc6dbe42d8 -a 10.0.0.2 -s 4420 -i 4 00:15:41.931 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:41.931 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:41.931 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.931 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:41.931 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:41.931 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:43.829 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:44.087 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:44.087 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.087 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.087 [ 0]:0x1 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.088 05:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45bd91ed5108463d8096e6d9fa41e9f4 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45bd91ed5108463d8096e6d9fa41e9f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.088 [ 1]:0x2 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.088 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.347 
05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.347 [ 0]:0x2 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.347 05:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:44.347 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:44.606 [2024-12-10 05:41:02.423526] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:44.606 request: 00:15:44.606 { 00:15:44.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.606 "nsid": 2, 00:15:44.606 "host": "nqn.2016-06.io.spdk:host1", 00:15:44.606 "method": "nvmf_ns_remove_host", 00:15:44.606 "req_id": 1 00:15:44.606 } 00:15:44.606 Got JSON-RPC error response 00:15:44.606 response: 00:15:44.606 { 00:15:44.606 "code": -32602, 00:15:44.606 "message": "Invalid parameters" 00:15:44.606 } 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:44.606 05:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.606 [ 0]:0x2 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.606 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.864 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67e72c6d044048af9178b90d743dc8fc 00:15:44.864 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67e72c6d044048af9178b90d743dc8fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=90457 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 90457 /var/tmp/host.sock 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 90457 ']' 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:44.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.865 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:44.865 [2024-12-10 05:41:02.656236] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:15:44.865 [2024-12-10 05:41:02.656280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90457 ] 00:15:44.865 [2024-12-10 05:41:02.735054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.865 [2024-12-10 05:41:02.775312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.123 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.123 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:45.123 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.380 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:45.638 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 93e16350-db0f-4d10-a42c-561cef21962f 00:15:45.638 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:45.638 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 93E16350DB0F4D10A42C561CEF21962F -i 00:15:45.896 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7adbed16-0fa3-41e2-bbe7-ee7c5a9c8f4e 00:15:45.896 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:45.896 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7ADBED160FA341E2BBE7EE7C5A9C8F4E -i 00:15:45.896 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:46.153 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:46.416 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:46.416 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:46.673 nvme0n1 00:15:46.673 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:46.673 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:46.930 nvme1n2 00:15:46.930 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:46.930 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:46.930 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:46.930 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:46.930 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:47.188 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:47.188 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:47.188 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:47.188 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:47.444 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 93e16350-db0f-4d10-a42c-561cef21962f == \9\3\e\1\6\3\5\0\-\d\b\0\f\-\4\d\1\0\-\a\4\2\c\-\5\6\1\c\e\f\2\1\9\6\2\f ]] 00:15:47.444 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:47.444 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:47.444 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:47.702 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7adbed16-0fa3-41e2-bbe7-ee7c5a9c8f4e == \7\a\d\b\e\d\1\6\-\0\f\a\3\-\4\1\e\2\-\b\b\e\7\-\e\e\7\c\5\a\9\c\8\f\4\e ]] 00:15:47.702 05:41:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.702 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 93e16350-db0f-4d10-a42c-561cef21962f 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 93E16350DB0F4D10A42C561CEF21962F 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 93E16350DB0F4D10A42C561CEF21962F 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:47.960 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 93E16350DB0F4D10A42C561CEF21962F 00:15:48.218 [2024-12-10 05:41:05.949642] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:48.218 [2024-12-10 05:41:05.949673] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:48.218 [2024-12-10 05:41:05.949682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.218 request: 00:15:48.218 { 00:15:48.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.218 "namespace": { 00:15:48.218 "bdev_name": "invalid", 00:15:48.218 "nsid": 1, 00:15:48.218 "nguid": "93E16350DB0F4D10A42C561CEF21962F", 00:15:48.218 "no_auto_visible": false, 00:15:48.218 "hide_metadata": false 00:15:48.218 }, 00:15:48.218 "method": "nvmf_subsystem_add_ns", 00:15:48.218 "req_id": 1 00:15:48.218 } 00:15:48.218 Got JSON-RPC error response 00:15:48.218 response: 00:15:48.218 { 00:15:48.218 "code": -32602, 00:15:48.218 "message": "Invalid parameters" 00:15:48.218 } 00:15:48.218 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:48.218 05:41:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:48.218 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:48.218 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:48.218 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 93e16350-db0f-4d10-a42c-561cef21962f 00:15:48.218 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:48.218 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 93E16350DB0F4D10A42C561CEF21962F -i 00:15:48.218 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 90457 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 90457 ']' 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 90457 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:50.745 05:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90457 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90457' 00:15:50.745 killing process with pid 90457 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 90457 00:15:50.745 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 90457 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.003 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
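The `killprocess` helper exercised above first runs `kill -0 <pid>` before signalling: signal 0 delivers nothing but reports, via exit status, whether the pid exists and is signalable. A self-contained sketch of that liveness probe (the wrapper name `pid_alive` is made up for illustration):

```shell
# kill -0 sends no signal; its exit status alone tells us whether
# the pid exists and we have permission to signal it.
pid_alive() {
    kill -0 "$1" 2>/dev/null
}

# Spawn a short-lived child, confirm it is alive, kill and reap it,
# then confirm the probe now fails.
sleep 60 &
child=$!
pid_alive "$child" && echo "child $child is alive"
kill "$child"
wait "$child" 2>/dev/null || true
pid_alive "$child" || echo "child $child is gone"
```

Reaping with `wait` matters: until the parent collects the exit status, the child lingers as a zombie and `kill -0` still reports it as present.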
00:15:51.003 rmmod nvme_tcp 00:15:51.261 rmmod nvme_fabrics 00:15:51.261 rmmod nvme_keyring 00:15:51.261 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 88483 ']' 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 88483 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 88483 ']' 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 88483 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88483 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88483' 00:15:51.261 killing process with pid 88483 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 88483 00:15:51.261 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 88483 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.520 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:53.423 00:15:53.423 real 0m26.587s 00:15:53.423 user 0m30.990s 00:15:53.423 sys 0m7.648s 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:53.423 ************************************ 00:15:53.423 END TEST nvmf_ns_masking 00:15:53.423 ************************************ 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:53.423 05:41:11 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.423 05:41:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.683 ************************************ 00:15:53.683 START TEST nvmf_nvme_cli 00:15:53.683 ************************************ 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:53.683 * Looking for test storage... 00:15:53.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.683 05:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.683 --rc genhtml_branch_coverage=1 00:15:53.683 --rc genhtml_function_coverage=1 00:15:53.683 --rc genhtml_legend=1 00:15:53.683 --rc geninfo_all_blocks=1 00:15:53.683 --rc geninfo_unexecuted_blocks=1 00:15:53.683 
00:15:53.683 ' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:53.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.683 --rc genhtml_branch_coverage=1 00:15:53.683 --rc genhtml_function_coverage=1 00:15:53.683 --rc genhtml_legend=1 00:15:53.683 --rc geninfo_all_blocks=1 00:15:53.683 --rc geninfo_unexecuted_blocks=1 00:15:53.683 00:15:53.683 ' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.683 --rc genhtml_branch_coverage=1 00:15:53.683 --rc genhtml_function_coverage=1 00:15:53.683 --rc genhtml_legend=1 00:15:53.683 --rc geninfo_all_blocks=1 00:15:53.683 --rc geninfo_unexecuted_blocks=1 00:15:53.683 00:15:53.683 ' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.683 --rc genhtml_branch_coverage=1 00:15:53.683 --rc genhtml_function_coverage=1 00:15:53.683 --rc genhtml_legend=1 00:15:53.683 --rc geninfo_all_blocks=1 00:15:53.683 --rc geninfo_unexecuted_blocks=1 00:15:53.683 00:15:53.683 ' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
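The `lt 1.15 2` / `cmp_versions` gate from scripts/common.sh traced above splits dotted version strings and compares them field by numeric field. A stand-alone sketch of that comparison (the name `version_lt` is invented; SPDK's `cmp_versions` also handles the other relational operators, and this sketch assumes purely numeric fields, since a leading zero would trip bash's octal parsing):

```shell
# Compare two dotted version strings numerically, field by field;
# a missing field is treated as 0, so "1.15" vs "2" decides on 1 < 2.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)   # IFS-split each version on dots
    local i x y
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        x=${v1[i]:-0} y=${v2[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Field-wise numeric comparison is what makes `1.15 < 2` come out true here, where a plain string comparison would get `1.15` vs `2` right only by accident and misorder cases like `1.9` vs `1.15`.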
00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.683 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.684 05:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.684 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.337 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.337 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:00.337 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:00.337 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:00.337 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:00.337 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:00.338 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:00.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:00.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.338 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:00.338 Found net devices under 0000:af:00.0: cvl_0_0 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:00.338 Found net devices under 0000:af:00.1: cvl_0_1 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.338 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.338 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:00.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:16:00.598 00:16:00.598 --- 10.0.0.2 ping statistics --- 00:16:00.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.598 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:00.598 00:16:00.598 --- 10.0.0.1 ping statistics --- 00:16:00.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.598 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.598 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=95419 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 95419 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 95419 ']' 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.598 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.598 [2024-12-10 05:41:18.533991] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:16:00.598 [2024-12-10 05:41:18.534034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.856 [2024-12-10 05:41:18.617622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.856 [2024-12-10 05:41:18.659776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.856 [2024-12-10 05:41:18.659813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.856 [2024-12-10 05:41:18.659819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.856 [2024-12-10 05:41:18.659826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.856 [2024-12-10 05:41:18.659830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
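The namespace plumbing shown above — moving one PF port (cvl_0_0) into cvl_0_0_ns_spdk as the target side, keeping cvl_0_1 in the root namespace as the initiator side, then addressing both ends — can be sketched as a dry run. This is a hypothetical reconstruction, not the test's actual `nvmf_tcp_init`: the `ip` command is stubbed to print its plan instead of configuring anything, so it runs without root or the E810 hardware.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split seen in the log: one port goes
# into a network namespace (target side), the other stays in the root
# namespace (initiator side). `ip` is stubbed to echo rather than configure.
ip() { echo "ip $*"; }   # stub: print the command instead of running it

setup_tcp_pair() {
  local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"
  ip addr add "$ini_ip/24" dev "$ini_if"
  ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
}

# Values mirror the log: target 10.0.0.2 inside the namespace, initiator 10.0.0.1.
setup_tcp_pair cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

After this split, the log's two pings (one from the root namespace to 10.0.0.2, one from inside the namespace back to 10.0.0.1) confirm a real TCP path between the ports before the target starts.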
00:16:00.856 [2024-12-10 05:41:18.661391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.856 [2024-12-10 05:41:18.661500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.856 [2024-12-10 05:41:18.661605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.856 [2024-12-10 05:41:18.661606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.422 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.422 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:01.422 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.422 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.422 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 [2024-12-10 05:41:19.405220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 Malloc0 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 Malloc1 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 [2024-12-10 05:41:19.492552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.680 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:01.938 00:16:01.938 Discovery Log Number of Records 2, Generation counter 2 00:16:01.938 =====Discovery Log Entry 0====== 00:16:01.938 trtype: tcp 00:16:01.938 adrfam: ipv4 00:16:01.938 subtype: current discovery subsystem 00:16:01.938 treq: not required 00:16:01.938 portid: 0 00:16:01.938 trsvcid: 4420 
00:16:01.938 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:01.938 traddr: 10.0.0.2 00:16:01.938 eflags: explicit discovery connections, duplicate discovery information 00:16:01.938 sectype: none 00:16:01.938 =====Discovery Log Entry 1====== 00:16:01.938 trtype: tcp 00:16:01.938 adrfam: ipv4 00:16:01.938 subtype: nvme subsystem 00:16:01.938 treq: not required 00:16:01.938 portid: 0 00:16:01.938 trsvcid: 4420 00:16:01.938 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:01.938 traddr: 10.0.0.2 00:16:01.938 eflags: none 00:16:01.938 sectype: none 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 
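The provisioning that nvme_cli.sh drives through `rpc_cmd` in the lines above can be summarized as the sequence below. `rpc_cmd` is stubbed to echo, since the real helper talks to a running nvmf_tgt over /var/tmp/spdk.sock; the arguments mirror the values visible in the log (64 MiB malloc bdevs, 512-byte blocks, the cnode1 NQN and serial).

```shell
#!/usr/bin/env bash
# Sketch of the RPC provisioning order from the log. rpc_cmd is a stub that
# prints instead of issuing JSON-RPC calls, so the sequence runs standalone.
rpc_cmd() { echo "rpc_cmd $*"; }

provision() {
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

provision
```

Once the listener is up, `nvme discover` against 10.0.0.2:4420 returns the two discovery log entries seen above: the discovery subsystem itself and cnode1.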
00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:16:01.938 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.311 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:03.311 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:03.311 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.311 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:03.311 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:03.311 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
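The `waitforserial` helper traced above simply polls `lsblk -l -o NAME,SERIAL` until the expected number of namespaces carrying the target serial appear, capping at 16 tries with a 2-second sleep between polls. A self-contained sketch of that loop, with `lsblk` stubbed so two matching namespaces "appear" on the third poll and the sleep dropped for brevity:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop from common/autotest_common.sh.
# lsblk is a stub: it returns nothing until the third poll, then lists two
# namespaces with the expected serial, simulating connect latency.
lsblk() {
  if (( i >= 3 )); then
    printf 'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\n'
  fi
}

waitforserial() {
  local serial=$1 want=$2 i=0 have=0
  while (( i++ <= 15 )); do
    # The real helper runs `lsblk -l -o NAME,SERIAL` and sleeps 2s per retry.
    have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    if (( have == want )); then return 0; fi
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME 2
```

The count-based check matters here because the subsystem exposes two namespaces (Malloc0 and Malloc1); waiting for a single match could race ahead of the second namespace attaching.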
00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 
05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:05.210 /dev/nvme0n2 00:16:05.210 /dev/nvme1n1 00:16:05.210 /dev/nvme1n2 ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:05.210 05:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:16:05.210 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.210 05:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.210 rmmod nvme_tcp 00:16:05.210 rmmod nvme_fabrics 00:16:05.210 rmmod nvme_keyring 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:05.210 05:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 95419 ']' 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 95419 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 95419 ']' 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 95419 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.210 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95419 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95419' 00:16:05.470 killing process with pid 95419 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 95419 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 95419 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:05.470 05:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.470 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:08.004 00:16:08.004 real 0m14.068s 00:16:08.004 user 0m20.965s 00:16:08.004 sys 0m5.782s 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.004 ************************************ 00:16:08.004 END TEST nvmf_nvme_cli 00:16:08.004 ************************************ 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- 
# set +x 00:16:08.004 ************************************ 00:16:08.004 START TEST nvmf_vfio_user 00:16:08.004 ************************************ 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:08.004 * Looking for test storage... 00:16:08.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.004 05:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.004 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.005 05:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:08.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.005 --rc genhtml_branch_coverage=1 00:16:08.005 --rc genhtml_function_coverage=1 00:16:08.005 --rc genhtml_legend=1 00:16:08.005 --rc geninfo_all_blocks=1 00:16:08.005 --rc geninfo_unexecuted_blocks=1 00:16:08.005 00:16:08.005 ' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:08.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.005 --rc genhtml_branch_coverage=1 00:16:08.005 --rc genhtml_function_coverage=1 00:16:08.005 --rc genhtml_legend=1 00:16:08.005 --rc geninfo_all_blocks=1 00:16:08.005 --rc geninfo_unexecuted_blocks=1 00:16:08.005 00:16:08.005 ' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:08.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.005 --rc genhtml_branch_coverage=1 00:16:08.005 --rc genhtml_function_coverage=1 00:16:08.005 --rc genhtml_legend=1 00:16:08.005 --rc geninfo_all_blocks=1 00:16:08.005 --rc geninfo_unexecuted_blocks=1 00:16:08.005 00:16:08.005 ' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:08.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.005 --rc genhtml_branch_coverage=1 00:16:08.005 --rc genhtml_function_coverage=1 00:16:08.005 --rc genhtml_legend=1 00:16:08.005 --rc geninfo_all_blocks=1 00:16:08.005 --rc geninfo_unexecuted_blocks=1 00:16:08.005 00:16:08.005 ' 00:16:08.005 05:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.005 05:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.005 05:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=96705 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 96705' 00:16:08.005 Process pid: 96705 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 96705 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 96705 
']' 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.006 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.006 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:08.006 [2024-12-10 05:41:25.804793] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:16:08.006 [2024-12-10 05:41:25.804840] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.006 [2024-12-10 05:41:25.887696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.006 [2024-12-10 05:41:25.928341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.006 [2024-12-10 05:41:25.928384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.006 [2024-12-10 05:41:25.928391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.006 [2024-12-10 05:41:25.928397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.006 [2024-12-10 05:41:25.928402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:08.006 [2024-12-10 05:41:25.929934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.006 [2024-12-10 05:41:25.930045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.006 [2024-12-10 05:41:25.930153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.006 [2024-12-10 05:41:25.930154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.938 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.938 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:08.938 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:09.870 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:10.128 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:10.128 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:10.128 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.128 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:10.128 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:10.128 Malloc1 00:16:10.128 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:10.385 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:10.642 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:10.900 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.900 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:10.900 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:11.157 Malloc2 00:16:11.157 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:11.157 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:11.414 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:11.674 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:11.674 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:11.674 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:11.674 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:11.674 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:11.674 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:11.674 [2024-12-10 05:41:29.501186] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:16:11.674 [2024-12-10 05:41:29.501235] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97396 ] 00:16:11.674 [2024-12-10 05:41:29.542731] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:11.674 [2024-12-10 05:41:29.545065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:11.674 [2024-12-10 05:41:29.545087] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3af77f5000 00:16:11.674 [2024-12-10 05:41:29.546063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.547065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.548070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.549080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.550075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.551082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.552086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.553092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.674 [2024-12-10 05:41:29.554105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:11.674 [2024-12-10 05:41:29.554116] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3af77ea000 00:16:11.674 [2024-12-10 05:41:29.555031] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:11.674 [2024-12-10 05:41:29.568482] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:11.674 [2024-12-10 05:41:29.568510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:11.675 [2024-12-10 05:41:29.574235] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:11.675 [2024-12-10 05:41:29.574270] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:11.675 [2024-12-10 05:41:29.574341] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:11.675 [2024-12-10 05:41:29.574356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:11.675 [2024-12-10 05:41:29.574364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:11.675 [2024-12-10 05:41:29.575226] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:11.675 [2024-12-10 05:41:29.575234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:11.675 [2024-12-10 05:41:29.575241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:11.675 [2024-12-10 05:41:29.576232] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:11.675 [2024-12-10 05:41:29.576240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:11.675 [2024-12-10 05:41:29.576246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:11.675 [2024-12-10 05:41:29.577235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:11.675 [2024-12-10 05:41:29.577243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:11.675 [2024-12-10 05:41:29.578243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:11.675 [2024-12-10 05:41:29.578250] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:11.675 [2024-12-10 05:41:29.578255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:11.675 [2024-12-10 05:41:29.578260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:11.675 [2024-12-10 05:41:29.578367] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:11.675 [2024-12-10 05:41:29.578371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:11.675 [2024-12-10 05:41:29.578376] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:11.675 [2024-12-10 05:41:29.579250] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:11.675 [2024-12-10 05:41:29.580253] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:11.675 [2024-12-10 05:41:29.581256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:11.675 [2024-12-10 05:41:29.582258] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:11.675 [2024-12-10 05:41:29.582322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:11.675 [2024-12-10 05:41:29.583270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:11.675 [2024-12-10 05:41:29.583277] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:11.675 [2024-12-10 05:41:29.583282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583302] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:11.675 [2024-12-10 05:41:29.583309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583319] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:11.675 [2024-12-10 05:41:29.583324] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.675 [2024-12-10 05:41:29.583327] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.675 [2024-12-10 05:41:29.583339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.675 [2024-12-10 05:41:29.583386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:16:11.675 [2024-12-10 05:41:29.583395] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:11.675 [2024-12-10 05:41:29.583399] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:11.675 [2024-12-10 05:41:29.583403] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:11.675 [2024-12-10 05:41:29.583407] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:11.675 [2024-12-10 05:41:29.583412] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:11.675 [2024-12-10 05:41:29.583415] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:11.675 [2024-12-10 05:41:29.583420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:11.675 [2024-12-10 05:41:29.583452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:11.675 [2024-12-10 05:41:29.583461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.675 [2024-12-10 05:41:29.583468] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.675 [2024-12-10 05:41:29.583476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.675 [2024-12-10 05:41:29.583483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.675 [2024-12-10 05:41:29.583487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:11.675 [2024-12-10 05:41:29.583513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:11.675 [2024-12-10 05:41:29.583520] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:11.675 [2024-12-10 05:41:29.583525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:16:11.675 [2024-12-10 05:41:29.583543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:11.675 [2024-12-10 05:41:29.583558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583620] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:11.676 [2024-12-10 05:41:29.583624] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:11.676 [2024-12-10 05:41:29.583628] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.676 [2024-12-10 05:41:29.583633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583656] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:11.676 [2024-12-10 05:41:29.583667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583679] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:11.676 [2024-12-10 05:41:29.583683] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.676 [2024-12-10 05:41:29.583686] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.676 [2024-12-10 05:41:29.583691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583741] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:11.676 [2024-12-10 05:41:29.583744] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.676 [2024-12-10 05:41:29.583747] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.676 [2024-12-10 05:41:29.583754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:16:11.676 [2024-12-10 05:41:29.583771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583801] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:11.676 [2024-12-10 05:41:29.583805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:11.676 [2024-12-10 05:41:29.583809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:11.676 [2024-12-10 05:41:29.583825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.583906] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:11.676 [2024-12-10 05:41:29.583910] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:11.676 [2024-12-10 05:41:29.583913] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:11.676 [2024-12-10 05:41:29.583916] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:11.676 [2024-12-10 05:41:29.583919] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:11.676 [2024-12-10 05:41:29.583924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:11.676 [2024-12-10 05:41:29.583931] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:11.676 [2024-12-10 05:41:29.583936] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:11.676 [2024-12-10 05:41:29.583939] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.676 [2024-12-10 05:41:29.583945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583951] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:11.676 [2024-12-10 05:41:29.583954] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.676 [2024-12-10 05:41:29.583957] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.676 [2024-12-10 05:41:29.583962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:11.676 [2024-12-10 05:41:29.583972] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:11.676 [2024-12-10 05:41:29.583975] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:11.676 [2024-12-10 05:41:29.583981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:11.676 [2024-12-10 05:41:29.583987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 
05:41:29.583998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.584006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:11.676 [2024-12-10 05:41:29.584013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:11.676 ===================================================== 00:16:11.677 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:11.677 ===================================================== 00:16:11.677 Controller Capabilities/Features 00:16:11.677 ================================ 00:16:11.677 Vendor ID: 4e58 00:16:11.677 Subsystem Vendor ID: 4e58 00:16:11.677 Serial Number: SPDK1 00:16:11.677 Model Number: SPDK bdev Controller 00:16:11.677 Firmware Version: 25.01 00:16:11.677 Recommended Arb Burst: 6 00:16:11.677 IEEE OUI Identifier: 8d 6b 50 00:16:11.677 Multi-path I/O 00:16:11.677 May have multiple subsystem ports: Yes 00:16:11.677 May have multiple controllers: Yes 00:16:11.677 Associated with SR-IOV VF: No 00:16:11.677 Max Data Transfer Size: 131072 00:16:11.677 Max Number of Namespaces: 32 00:16:11.677 Max Number of I/O Queues: 127 00:16:11.677 NVMe Specification Version (VS): 1.3 00:16:11.677 NVMe Specification Version (Identify): 1.3 00:16:11.677 Maximum Queue Entries: 256 00:16:11.677 Contiguous Queues Required: Yes 00:16:11.677 Arbitration Mechanisms Supported 00:16:11.677 Weighted Round Robin: Not Supported 00:16:11.677 Vendor Specific: Not Supported 00:16:11.677 Reset Timeout: 15000 ms 00:16:11.677 Doorbell Stride: 4 bytes 00:16:11.677 NVM Subsystem Reset: Not Supported 00:16:11.677 Command Sets Supported 00:16:11.677 NVM Command Set: Supported 00:16:11.677 Boot Partition: Not Supported 00:16:11.677 Memory Page Size Minimum: 4096 bytes 00:16:11.677 
Memory Page Size Maximum: 4096 bytes 00:16:11.677 Persistent Memory Region: Not Supported 00:16:11.677 Optional Asynchronous Events Supported 00:16:11.677 Namespace Attribute Notices: Supported 00:16:11.677 Firmware Activation Notices: Not Supported 00:16:11.677 ANA Change Notices: Not Supported 00:16:11.677 PLE Aggregate Log Change Notices: Not Supported 00:16:11.677 LBA Status Info Alert Notices: Not Supported 00:16:11.677 EGE Aggregate Log Change Notices: Not Supported 00:16:11.677 Normal NVM Subsystem Shutdown event: Not Supported 00:16:11.677 Zone Descriptor Change Notices: Not Supported 00:16:11.677 Discovery Log Change Notices: Not Supported 00:16:11.677 Controller Attributes 00:16:11.677 128-bit Host Identifier: Supported 00:16:11.677 Non-Operational Permissive Mode: Not Supported 00:16:11.677 NVM Sets: Not Supported 00:16:11.677 Read Recovery Levels: Not Supported 00:16:11.677 Endurance Groups: Not Supported 00:16:11.677 Predictable Latency Mode: Not Supported 00:16:11.677 Traffic Based Keep ALive: Not Supported 00:16:11.677 Namespace Granularity: Not Supported 00:16:11.677 SQ Associations: Not Supported 00:16:11.677 UUID List: Not Supported 00:16:11.677 Multi-Domain Subsystem: Not Supported 00:16:11.677 Fixed Capacity Management: Not Supported 00:16:11.677 Variable Capacity Management: Not Supported 00:16:11.677 Delete Endurance Group: Not Supported 00:16:11.677 Delete NVM Set: Not Supported 00:16:11.677 Extended LBA Formats Supported: Not Supported 00:16:11.677 Flexible Data Placement Supported: Not Supported 00:16:11.677 00:16:11.677 Controller Memory Buffer Support 00:16:11.677 ================================ 00:16:11.677 Supported: No 00:16:11.677 00:16:11.677 Persistent Memory Region Support 00:16:11.677 ================================ 00:16:11.677 Supported: No 00:16:11.677 00:16:11.677 Admin Command Set Attributes 00:16:11.677 ============================ 00:16:11.677 Security Send/Receive: Not Supported 00:16:11.677 Format NVM: Not Supported 
00:16:11.677 Firmware Activate/Download: Not Supported 00:16:11.677 Namespace Management: Not Supported 00:16:11.677 Device Self-Test: Not Supported 00:16:11.677 Directives: Not Supported 00:16:11.677 NVMe-MI: Not Supported 00:16:11.677 Virtualization Management: Not Supported 00:16:11.677 Doorbell Buffer Config: Not Supported 00:16:11.677 Get LBA Status Capability: Not Supported 00:16:11.677 Command & Feature Lockdown Capability: Not Supported 00:16:11.677 Abort Command Limit: 4 00:16:11.677 Async Event Request Limit: 4 00:16:11.677 Number of Firmware Slots: N/A 00:16:11.677 Firmware Slot 1 Read-Only: N/A 00:16:11.677 Firmware Activation Without Reset: N/A 00:16:11.677 Multiple Update Detection Support: N/A 00:16:11.677 Firmware Update Granularity: No Information Provided 00:16:11.677 Per-Namespace SMART Log: No 00:16:11.677 Asymmetric Namespace Access Log Page: Not Supported 00:16:11.677 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:11.677 Command Effects Log Page: Supported 00:16:11.677 Get Log Page Extended Data: Supported 00:16:11.677 Telemetry Log Pages: Not Supported 00:16:11.677 Persistent Event Log Pages: Not Supported 00:16:11.677 Supported Log Pages Log Page: May Support 00:16:11.677 Commands Supported & Effects Log Page: Not Supported 00:16:11.677 Feature Identifiers & Effects Log Page:May Support 00:16:11.677 NVMe-MI Commands & Effects Log Page: May Support 00:16:11.677 Data Area 4 for Telemetry Log: Not Supported 00:16:11.677 Error Log Page Entries Supported: 128 00:16:11.677 Keep Alive: Supported 00:16:11.677 Keep Alive Granularity: 10000 ms 00:16:11.677 00:16:11.677 NVM Command Set Attributes 00:16:11.677 ========================== 00:16:11.677 Submission Queue Entry Size 00:16:11.677 Max: 64 00:16:11.677 Min: 64 00:16:11.677 Completion Queue Entry Size 00:16:11.677 Max: 16 00:16:11.677 Min: 16 00:16:11.677 Number of Namespaces: 32 00:16:11.677 Compare Command: Supported 00:16:11.677 Write Uncorrectable Command: Not Supported 00:16:11.677 Dataset 
Management Command: Supported 00:16:11.677 Write Zeroes Command: Supported 00:16:11.677 Set Features Save Field: Not Supported 00:16:11.677 Reservations: Not Supported 00:16:11.677 Timestamp: Not Supported 00:16:11.677 Copy: Supported 00:16:11.677 Volatile Write Cache: Present 00:16:11.677 Atomic Write Unit (Normal): 1 00:16:11.677 Atomic Write Unit (PFail): 1 00:16:11.677 Atomic Compare & Write Unit: 1 00:16:11.677 Fused Compare & Write: Supported 00:16:11.677 Scatter-Gather List 00:16:11.677 SGL Command Set: Supported (Dword aligned) 00:16:11.677 SGL Keyed: Not Supported 00:16:11.677 SGL Bit Bucket Descriptor: Not Supported 00:16:11.678 SGL Metadata Pointer: Not Supported 00:16:11.678 Oversized SGL: Not Supported 00:16:11.678 SGL Metadata Address: Not Supported 00:16:11.678 SGL Offset: Not Supported 00:16:11.678 Transport SGL Data Block: Not Supported 00:16:11.678 Replay Protected Memory Block: Not Supported 00:16:11.678 00:16:11.678 Firmware Slot Information 00:16:11.678 ========================= 00:16:11.678 Active slot: 1 00:16:11.678 Slot 1 Firmware Revision: 25.01 00:16:11.678 00:16:11.678 00:16:11.678 Commands Supported and Effects 00:16:11.678 ============================== 00:16:11.678 Admin Commands 00:16:11.678 -------------- 00:16:11.678 Get Log Page (02h): Supported 00:16:11.678 Identify (06h): Supported 00:16:11.678 Abort (08h): Supported 00:16:11.678 Set Features (09h): Supported 00:16:11.678 Get Features (0Ah): Supported 00:16:11.678 Asynchronous Event Request (0Ch): Supported 00:16:11.678 Keep Alive (18h): Supported 00:16:11.678 I/O Commands 00:16:11.678 ------------ 00:16:11.678 Flush (00h): Supported LBA-Change 00:16:11.678 Write (01h): Supported LBA-Change 00:16:11.678 Read (02h): Supported 00:16:11.678 Compare (05h): Supported 00:16:11.678 Write Zeroes (08h): Supported LBA-Change 00:16:11.678 Dataset Management (09h): Supported LBA-Change 00:16:11.678 Copy (19h): Supported LBA-Change 00:16:11.678 00:16:11.678 Error Log 00:16:11.678 ========= 
00:16:11.678 00:16:11.678 Arbitration 00:16:11.678 =========== 00:16:11.678 Arbitration Burst: 1 00:16:11.678 00:16:11.678 Power Management 00:16:11.678 ================ 00:16:11.678 Number of Power States: 1 00:16:11.678 Current Power State: Power State #0 00:16:11.678 Power State #0: 00:16:11.678 Max Power: 0.00 W 00:16:11.678 Non-Operational State: Operational 00:16:11.678 Entry Latency: Not Reported 00:16:11.678 Exit Latency: Not Reported 00:16:11.678 Relative Read Throughput: 0 00:16:11.678 Relative Read Latency: 0 00:16:11.678 Relative Write Throughput: 0 00:16:11.678 Relative Write Latency: 0 00:16:11.678 Idle Power: Not Reported 00:16:11.678 Active Power: Not Reported 00:16:11.678 Non-Operational Permissive Mode: Not Supported 00:16:11.678 00:16:11.678 Health Information 00:16:11.678 ================== 00:16:11.678 Critical Warnings: 00:16:11.678 Available Spare Space: OK 00:16:11.678 Temperature: OK 00:16:11.678 Device Reliability: OK 00:16:11.678 Read Only: No 00:16:11.678 Volatile Memory Backup: OK 00:16:11.678 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:11.678 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:11.678 Available Spare: 0% 00:16:11.678 Available Sp[2024-12-10 05:41:29.584091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:11.678 [2024-12-10 05:41:29.584102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:11.678 [2024-12-10 05:41:29.584125] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:11.678 [2024-12-10 05:41:29.584134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.678 [2024-12-10 05:41:29.584139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.678 [2024-12-10 05:41:29.584145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.678 [2024-12-10 05:41:29.584150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.678 [2024-12-10 05:41:29.584276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:11.678 [2024-12-10 05:41:29.584286] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:11.678 [2024-12-10 05:41:29.585281] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:11.678 [2024-12-10 05:41:29.585328] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:11.678 [2024-12-10 05:41:29.585334] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:11.678 [2024-12-10 05:41:29.586286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:11.678 [2024-12-10 05:41:29.586298] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:11.678 [2024-12-10 05:41:29.586348] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:11.678 [2024-12-10 05:41:29.587311] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:11.678 are Threshold: 0% 00:16:11.678 Life Percentage Used: 0% 00:16:11.678 Data Units Read: 0 00:16:11.678 Data 
Units Written: 0 00:16:11.678 Host Read Commands: 0 00:16:11.678 Host Write Commands: 0 00:16:11.678 Controller Busy Time: 0 minutes 00:16:11.678 Power Cycles: 0 00:16:11.678 Power On Hours: 0 hours 00:16:11.678 Unsafe Shutdowns: 0 00:16:11.678 Unrecoverable Media Errors: 0 00:16:11.678 Lifetime Error Log Entries: 0 00:16:11.678 Warning Temperature Time: 0 minutes 00:16:11.678 Critical Temperature Time: 0 minutes 00:16:11.678 00:16:11.678 Number of Queues 00:16:11.678 ================ 00:16:11.678 Number of I/O Submission Queues: 127 00:16:11.678 Number of I/O Completion Queues: 127 00:16:11.678 00:16:11.678 Active Namespaces 00:16:11.678 ================= 00:16:11.678 Namespace ID:1 00:16:11.678 Error Recovery Timeout: Unlimited 00:16:11.678 Command Set Identifier: NVM (00h) 00:16:11.678 Deallocate: Supported 00:16:11.678 Deallocated/Unwritten Error: Not Supported 00:16:11.678 Deallocated Read Value: Unknown 00:16:11.678 Deallocate in Write Zeroes: Not Supported 00:16:11.678 Deallocated Guard Field: 0xFFFF 00:16:11.678 Flush: Supported 00:16:11.678 Reservation: Supported 00:16:11.678 Namespace Sharing Capabilities: Multiple Controllers 00:16:11.678 Size (in LBAs): 131072 (0GiB) 00:16:11.678 Capacity (in LBAs): 131072 (0GiB) 00:16:11.678 Utilization (in LBAs): 131072 (0GiB) 00:16:11.678 NGUID: 845AF6A01B5C4017974099D7F6BF0D2E 00:16:11.678 UUID: 845af6a0-1b5c-4017-9740-99d7f6bf0d2e 00:16:11.678 Thin Provisioning: Not Supported 00:16:11.678 Per-NS Atomic Units: Yes 00:16:11.678 Atomic Boundary Size (Normal): 0 00:16:11.678 Atomic Boundary Size (PFail): 0 00:16:11.678 Atomic Boundary Offset: 0 00:16:11.678 Maximum Single Source Range Length: 65535 00:16:11.678 Maximum Copy Length: 65535 00:16:11.678 Maximum Source Range Count: 1 00:16:11.678 NGUID/EUI64 Never Reused: No 00:16:11.678 Namespace Write Protected: No 00:16:11.678 Number of LBA Formats: 1 00:16:11.678 Current LBA Format: LBA Format #00 00:16:11.678 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:16:11.678 00:16:11.678 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:11.937 [2024-12-10 05:41:29.815040] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:17.206 Initializing NVMe Controllers 00:16:17.206 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:17.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:17.206 Initialization complete. Launching workers. 00:16:17.206 ======================================================== 00:16:17.206 Latency(us) 00:16:17.206 Device Information : IOPS MiB/s Average min max 00:16:17.206 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39930.78 155.98 3205.78 961.78 6662.54 00:16:17.206 ======================================================== 00:16:17.206 Total : 39930.78 155.98 3205.78 961.78 6662.54 00:16:17.206 00:16:17.206 [2024-12-10 05:41:34.836850] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:17.206 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:17.206 [2024-12-10 05:41:35.070922] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:22.474 Initializing NVMe Controllers 00:16:22.474 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:16:22.474 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:22.474 Initialization complete. Launching workers. 00:16:22.474 ======================================================== 00:16:22.474 Latency(us) 00:16:22.474 Device Information : IOPS MiB/s Average min max 00:16:22.474 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.59 62.74 7975.17 5983.71 9975.22 00:16:22.474 ======================================================== 00:16:22.474 Total : 16060.59 62.74 7975.17 5983.71 9975.22 00:16:22.474 00:16:22.474 [2024-12-10 05:41:40.109794] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:22.474 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:22.474 [2024-12-10 05:41:40.323798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:27.743 [2024-12-10 05:41:45.406617] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:27.743 Initializing NVMe Controllers 00:16:27.743 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:27.743 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:27.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:27.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:27.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:27.743 Initialization complete. Launching workers. 
00:16:27.743 Starting thread on core 2 00:16:27.743 Starting thread on core 3 00:16:27.743 Starting thread on core 1 00:16:27.743 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:28.001 [2024-12-10 05:41:45.719654] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:31.289 [2024-12-10 05:41:48.792209] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:31.289 Initializing NVMe Controllers 00:16:31.289 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.289 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.289 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:31.289 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:31.289 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:31.289 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:31.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:31.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:31.289 Initialization complete. Launching workers. 
00:16:31.289 Starting thread on core 1 with urgent priority queue 00:16:31.289 Starting thread on core 2 with urgent priority queue 00:16:31.289 Starting thread on core 3 with urgent priority queue 00:16:31.289 Starting thread on core 0 with urgent priority queue 00:16:31.289 SPDK bdev Controller (SPDK1 ) core 0: 7804.00 IO/s 12.81 secs/100000 ios 00:16:31.289 SPDK bdev Controller (SPDK1 ) core 1: 7754.00 IO/s 12.90 secs/100000 ios 00:16:31.289 SPDK bdev Controller (SPDK1 ) core 2: 10128.00 IO/s 9.87 secs/100000 ios 00:16:31.289 SPDK bdev Controller (SPDK1 ) core 3: 7624.33 IO/s 13.12 secs/100000 ios 00:16:31.289 ======================================================== 00:16:31.289 00:16:31.289 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:31.289 [2024-12-10 05:41:49.085641] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:31.289 Initializing NVMe Controllers 00:16:31.289 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.289 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.289 Namespace ID: 1 size: 0GB 00:16:31.289 Initialization complete. 00:16:31.289 INFO: using host memory buffer for IO 00:16:31.289 Hello world! 
00:16:31.289 [2024-12-10 05:41:49.118877] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:31.289 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:31.548 [2024-12-10 05:41:49.405603] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:32.483 Initializing NVMe Controllers 00:16:32.483 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.483 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.483 Initialization complete. Launching workers. 00:16:32.483 submit (in ns) avg, min, max = 5852.0, 3201.0, 4000212.4 00:16:32.483 complete (in ns) avg, min, max = 21691.4, 1753.3, 4013794.3 00:16:32.483 00:16:32.483 Submit histogram 00:16:32.483 ================ 00:16:32.483 Range in us Cumulative Count 00:16:32.483 3.200 - 3.215: 0.2582% ( 42) 00:16:32.483 3.215 - 3.230: 2.1083% ( 301) 00:16:32.483 3.230 - 3.246: 6.8904% ( 778) 00:16:32.483 3.246 - 3.261: 12.4101% ( 898) 00:16:32.483 3.261 - 3.276: 18.1880% ( 940) 00:16:32.483 3.276 - 3.291: 24.8571% ( 1085) 00:16:32.483 3.291 - 3.307: 31.5508% ( 1089) 00:16:32.483 3.307 - 3.322: 37.6667% ( 995) 00:16:32.483 3.322 - 3.337: 43.4446% ( 940) 00:16:32.483 3.337 - 3.352: 48.9397% ( 894) 00:16:32.483 3.352 - 3.368: 54.2566% ( 865) 00:16:32.483 3.368 - 3.383: 61.2207% ( 1133) 00:16:32.483 3.383 - 3.398: 69.0885% ( 1280) 00:16:32.483 3.398 - 3.413: 73.9074% ( 784) 00:16:32.483 3.413 - 3.429: 79.1382% ( 851) 00:16:32.483 3.429 - 3.444: 82.5005% ( 547) 00:16:32.483 3.444 - 3.459: 84.8546% ( 383) 00:16:32.483 3.459 - 3.474: 86.1086% ( 204) 00:16:32.483 3.474 - 3.490: 86.9076% ( 130) 00:16:32.483 3.490 - 3.505: 87.4608% ( 90) 00:16:32.483 3.505 - 3.520: 87.8726% ( 
67) 00:16:32.483 3.520 - 3.535: 88.5488% ( 110) 00:16:32.483 3.535 - 3.550: 89.3601% ( 132) 00:16:32.483 3.550 - 3.566: 90.3190% ( 156) 00:16:32.483 3.566 - 3.581: 91.3824% ( 173) 00:16:32.483 3.581 - 3.596: 92.4273% ( 170) 00:16:32.483 3.596 - 3.611: 93.4169% ( 161) 00:16:32.483 3.611 - 3.627: 94.3697% ( 155) 00:16:32.483 3.627 - 3.642: 95.3900% ( 166) 00:16:32.483 3.642 - 3.657: 96.0723% ( 111) 00:16:32.483 3.657 - 3.672: 96.9820% ( 148) 00:16:32.483 3.672 - 3.688: 97.6151% ( 103) 00:16:32.483 3.688 - 3.703: 98.1130% ( 81) 00:16:32.483 3.703 - 3.718: 98.4387% ( 53) 00:16:32.483 3.718 - 3.733: 98.7645% ( 53) 00:16:32.483 3.733 - 3.749: 98.9797% ( 35) 00:16:32.483 3.749 - 3.764: 99.1763% ( 32) 00:16:32.483 3.764 - 3.779: 99.3423% ( 27) 00:16:32.483 3.779 - 3.794: 99.4099% ( 11) 00:16:32.483 3.794 - 3.810: 99.4960% ( 14) 00:16:32.483 3.810 - 3.825: 99.5083% ( 2) 00:16:32.483 3.825 - 3.840: 99.5144% ( 1) 00:16:32.483 3.840 - 3.855: 99.5390% ( 4) 00:16:32.483 3.855 - 3.870: 99.5451% ( 1) 00:16:32.483 3.901 - 3.931: 99.5513% ( 1) 00:16:32.483 3.931 - 3.962: 99.5574% ( 1) 00:16:32.483 4.023 - 4.053: 99.5636% ( 1) 00:16:32.483 4.084 - 4.114: 99.5697% ( 1) 00:16:32.483 5.242 - 5.272: 99.5759% ( 1) 00:16:32.483 5.272 - 5.303: 99.5820% ( 1) 00:16:32.483 5.303 - 5.333: 99.5882% ( 1) 00:16:32.483 5.394 - 5.425: 99.5943% ( 1) 00:16:32.483 5.455 - 5.486: 99.6128% ( 3) 00:16:32.483 5.547 - 5.577: 99.6189% ( 1) 00:16:32.483 5.851 - 5.882: 99.6312% ( 2) 00:16:32.483 5.912 - 5.943: 99.6373% ( 1) 00:16:32.483 5.973 - 6.004: 99.6435% ( 1) 00:16:32.483 6.034 - 6.065: 99.6496% ( 1) 00:16:32.483 6.156 - 6.187: 99.6681% ( 3) 00:16:32.483 6.217 - 6.248: 99.6742% ( 1) 00:16:32.483 6.248 - 6.278: 99.6804% ( 1) 00:16:32.483 6.309 - 6.339: 99.6865% ( 1) 00:16:32.483 6.430 - 6.461: 99.6988% ( 2) 00:16:32.483 6.491 - 6.522: 99.7111% ( 2) 00:16:32.483 6.552 - 6.583: 99.7173% ( 1) 00:16:32.483 6.583 - 6.613: 99.7295% ( 2) 00:16:32.483 6.644 - 6.674: 99.7357% ( 1) 00:16:32.483 6.705 - 6.735: 
99.7418% ( 1) 00:16:32.483 6.827 - 6.857: 99.7480% ( 1) 00:16:32.483 6.888 - 6.918: 99.7541% ( 1) 00:16:32.483 7.040 - 7.070: 99.7603% ( 1) 00:16:32.483 7.101 - 7.131: 99.7664% ( 1) 00:16:32.483 7.284 - 7.314: 99.7726% ( 1) 00:16:32.483 7.314 - 7.345: 99.7787% ( 1) 00:16:32.483 7.345 - 7.375: 99.7849% ( 1) 00:16:32.483 7.436 - 7.467: 99.7910% ( 1) 00:16:32.483 7.497 - 7.528: 99.7972% ( 1) 00:16:32.483 7.558 - 7.589: 99.8033% ( 1) 00:16:32.483 7.589 - 7.619: 99.8095% ( 1) 00:16:32.484 [2024-12-10 05:41:50.427676] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:32.742 7.710 - 7.741: 99.8156% ( 1) 00:16:32.742 7.771 - 7.802: 99.8217% ( 1) 00:16:32.742 7.802 - 7.863: 99.8402% ( 3) 00:16:32.742 7.863 - 7.924: 99.8525% ( 2) 00:16:32.742 7.924 - 7.985: 99.8586% ( 1) 00:16:32.742 7.985 - 8.046: 99.8709% ( 2) 00:16:32.742 8.046 - 8.107: 99.8832% ( 2) 00:16:32.742 8.168 - 8.229: 99.8955% ( 2) 00:16:32.742 8.290 - 8.350: 99.9017% ( 1) 00:16:32.742 8.472 - 8.533: 99.9078% ( 1) 00:16:32.742 8.594 - 8.655: 99.9139% ( 1) 00:16:32.742 8.899 - 8.960: 99.9201% ( 1) 00:16:32.742 13.166 - 13.227: 99.9262% ( 1) 00:16:32.742 13.714 - 13.775: 99.9324% ( 1) 00:16:32.742 153.112 - 154.088: 99.9385% ( 1) 00:16:32.742 3994.575 - 4025.783: 100.0000% ( 10) 00:16:32.742 00:16:32.742 Complete histogram 00:16:32.742 ================== 00:16:32.742 Range in us Cumulative Count 00:16:32.742 1.752 - 1.760: 0.6024% ( 98) 00:16:32.742 1.760 - 1.768: 9.5396% ( 1454) 00:16:32.742 1.768 - 1.775: 35.2818% ( 4188) 00:16:32.742 1.775 - 1.783: 58.5592% ( 3787) 00:16:32.742 1.783 - 1.790: 67.8837% ( 1517) 00:16:32.742 1.790 - 1.798: 71.7377% ( 627) 00:16:32.742 1.798 - 1.806: 74.4545% ( 442) 00:16:32.742 1.806 - 1.813: 76.3108% ( 302) 00:16:32.742 1.813 - 1.821: 79.6115% ( 537) 00:16:32.742 1.821 - 1.829: 85.7951% ( 1006) 00:16:32.742 1.829 - 1.836: 91.0259% ( 851) 00:16:32.742 1.836 - 1.844: 93.9394% ( 474) 00:16:32.742 1.844 - 1.851: 95.6297% ( 
275) 00:16:32.742 1.851 - 1.859: 96.9021% ( 207) 00:16:32.742 1.859 - 1.867: 97.5659% ( 108) 00:16:32.742 1.867 - 1.874: 97.9101% ( 56) 00:16:32.742 1.874 - 1.882: 98.1990% ( 47) 00:16:32.742 1.882 - 1.890: 98.4142% ( 35) 00:16:32.742 1.890 - 1.897: 98.6354% ( 36) 00:16:32.742 1.897 - 1.905: 98.7891% ( 25) 00:16:32.742 1.905 - 1.912: 98.9059% ( 19) 00:16:32.742 1.912 - 1.920: 99.0104% ( 17) 00:16:32.742 1.920 - 1.928: 99.0719% ( 10) 00:16:32.742 1.928 - 1.935: 99.1087% ( 6) 00:16:32.742 1.935 - 1.943: 99.1149% ( 1) 00:16:32.743 1.943 - 1.950: 99.1210% ( 1) 00:16:32.743 1.950 - 1.966: 99.1333% ( 2) 00:16:32.743 1.966 - 1.981: 99.1518% ( 3) 00:16:32.743 1.981 - 1.996: 99.1702% ( 3) 00:16:32.743 1.996 - 2.011: 99.1948% ( 4) 00:16:32.743 2.027 - 2.042: 99.2009% ( 1) 00:16:32.743 2.057 - 2.072: 99.2685% ( 11) 00:16:32.743 2.072 - 2.088: 99.3177% ( 8) 00:16:32.743 2.088 - 2.103: 99.3362% ( 3) 00:16:32.743 2.103 - 2.118: 99.3423% ( 1) 00:16:32.743 2.118 - 2.133: 99.3485% ( 1) 00:16:32.743 2.149 - 2.164: 99.3546% ( 1) 00:16:32.743 4.297 - 4.328: 99.3607% ( 1) 00:16:32.743 4.602 - 4.632: 99.3669% ( 1) 00:16:32.743 4.663 - 4.693: 99.3730% ( 1) 00:16:32.743 4.785 - 4.815: 99.3792% ( 1) 00:16:32.743 4.846 - 4.876: 99.3853% ( 1) 00:16:32.743 4.937 - 4.968: 99.3915% ( 1) 00:16:32.743 4.998 - 5.029: 99.4099% ( 3) 00:16:32.743 5.272 - 5.303: 99.4284% ( 3) 00:16:32.743 5.394 - 5.425: 99.4345% ( 1) 00:16:32.743 5.516 - 5.547: 99.4407% ( 1) 00:16:32.743 5.821 - 5.851: 99.4468% ( 1) 00:16:32.743 6.004 - 6.034: 99.4529% ( 1) 00:16:32.743 6.278 - 6.309: 99.4591% ( 1) 00:16:32.743 6.461 - 6.491: 99.4652% ( 1) 00:16:32.743 6.705 - 6.735: 99.4714% ( 1) 00:16:32.743 7.710 - 7.741: 99.4775% ( 1) 00:16:32.743 8.229 - 8.290: 99.4837% ( 1) 00:16:32.743 9.691 - 9.752: 99.4898% ( 1) 00:16:32.743 14.811 - 14.872: 99.4960% ( 1) 00:16:32.743 39.010 - 39.253: 99.5021% ( 1) 00:16:32.743 3994.575 - 4025.783: 100.0000% ( 81) 00:16:32.743 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:32.743 [ 00:16:32.743 { 00:16:32.743 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:32.743 "subtype": "Discovery", 00:16:32.743 "listen_addresses": [], 00:16:32.743 "allow_any_host": true, 00:16:32.743 "hosts": [] 00:16:32.743 }, 00:16:32.743 { 00:16:32.743 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:32.743 "subtype": "NVMe", 00:16:32.743 "listen_addresses": [ 00:16:32.743 { 00:16:32.743 "trtype": "VFIOUSER", 00:16:32.743 "adrfam": "IPv4", 00:16:32.743 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:32.743 "trsvcid": "0" 00:16:32.743 } 00:16:32.743 ], 00:16:32.743 "allow_any_host": true, 00:16:32.743 "hosts": [], 00:16:32.743 "serial_number": "SPDK1", 00:16:32.743 "model_number": "SPDK bdev Controller", 00:16:32.743 "max_namespaces": 32, 00:16:32.743 "min_cntlid": 1, 00:16:32.743 "max_cntlid": 65519, 00:16:32.743 "namespaces": [ 00:16:32.743 { 00:16:32.743 "nsid": 1, 00:16:32.743 "bdev_name": "Malloc1", 00:16:32.743 "name": "Malloc1", 00:16:32.743 "nguid": "845AF6A01B5C4017974099D7F6BF0D2E", 00:16:32.743 "uuid": "845af6a0-1b5c-4017-9740-99d7f6bf0d2e" 00:16:32.743 } 00:16:32.743 ] 00:16:32.743 }, 00:16:32.743 { 00:16:32.743 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:32.743 "subtype": "NVMe", 00:16:32.743 "listen_addresses": [ 00:16:32.743 { 
00:16:32.743 "trtype": "VFIOUSER", 00:16:32.743 "adrfam": "IPv4", 00:16:32.743 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:32.743 "trsvcid": "0" 00:16:32.743 } 00:16:32.743 ], 00:16:32.743 "allow_any_host": true, 00:16:32.743 "hosts": [], 00:16:32.743 "serial_number": "SPDK2", 00:16:32.743 "model_number": "SPDK bdev Controller", 00:16:32.743 "max_namespaces": 32, 00:16:32.743 "min_cntlid": 1, 00:16:32.743 "max_cntlid": 65519, 00:16:32.743 "namespaces": [ 00:16:32.743 { 00:16:32.743 "nsid": 1, 00:16:32.743 "bdev_name": "Malloc2", 00:16:32.743 "name": "Malloc2", 00:16:32.743 "nguid": "512079B105694936A03A5B89032A5F40", 00:16:32.743 "uuid": "512079b1-0569-4936-a03a-5b89032a5f40" 00:16:32.743 } 00:16:32.743 ] 00:16:32.743 } 00:16:32.743 ] 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=100804 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:32.743 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:33.002 [2024-12-10 05:41:50.824619] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.002 Malloc3 00:16:33.002 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:33.260 [2024-12-10 05:41:51.087665] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.260 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.260 Asynchronous Event Request test 00:16:33.260 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.260 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.260 Registering asynchronous event callbacks... 00:16:33.260 Starting namespace attribute notice tests for all controllers... 00:16:33.260 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:33.260 aer_cb - Changed Namespace 00:16:33.260 Cleaning up... 
00:16:33.519 [ 00:16:33.519 { 00:16:33.519 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:33.519 "subtype": "Discovery", 00:16:33.519 "listen_addresses": [], 00:16:33.519 "allow_any_host": true, 00:16:33.519 "hosts": [] 00:16:33.519 }, 00:16:33.519 { 00:16:33.519 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:33.519 "subtype": "NVMe", 00:16:33.519 "listen_addresses": [ 00:16:33.519 { 00:16:33.519 "trtype": "VFIOUSER", 00:16:33.519 "adrfam": "IPv4", 00:16:33.519 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:33.519 "trsvcid": "0" 00:16:33.519 } 00:16:33.519 ], 00:16:33.519 "allow_any_host": true, 00:16:33.519 "hosts": [], 00:16:33.519 "serial_number": "SPDK1", 00:16:33.519 "model_number": "SPDK bdev Controller", 00:16:33.519 "max_namespaces": 32, 00:16:33.519 "min_cntlid": 1, 00:16:33.519 "max_cntlid": 65519, 00:16:33.519 "namespaces": [ 00:16:33.519 { 00:16:33.519 "nsid": 1, 00:16:33.519 "bdev_name": "Malloc1", 00:16:33.519 "name": "Malloc1", 00:16:33.519 "nguid": "845AF6A01B5C4017974099D7F6BF0D2E", 00:16:33.519 "uuid": "845af6a0-1b5c-4017-9740-99d7f6bf0d2e" 00:16:33.519 }, 00:16:33.519 { 00:16:33.519 "nsid": 2, 00:16:33.519 "bdev_name": "Malloc3", 00:16:33.520 "name": "Malloc3", 00:16:33.520 "nguid": "720D2B97B7BF47EBA8C1E405C6637903", 00:16:33.520 "uuid": "720d2b97-b7bf-47eb-a8c1-e405c6637903" 00:16:33.520 } 00:16:33.520 ] 00:16:33.520 }, 00:16:33.520 { 00:16:33.520 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:33.520 "subtype": "NVMe", 00:16:33.520 "listen_addresses": [ 00:16:33.520 { 00:16:33.520 "trtype": "VFIOUSER", 00:16:33.520 "adrfam": "IPv4", 00:16:33.520 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:33.520 "trsvcid": "0" 00:16:33.520 } 00:16:33.520 ], 00:16:33.520 "allow_any_host": true, 00:16:33.520 "hosts": [], 00:16:33.520 "serial_number": "SPDK2", 00:16:33.520 "model_number": "SPDK bdev Controller", 00:16:33.520 "max_namespaces": 32, 00:16:33.520 "min_cntlid": 1, 00:16:33.520 "max_cntlid": 65519, 00:16:33.520 "namespaces": [ 
00:16:33.520 { 00:16:33.520 "nsid": 1, 00:16:33.520 "bdev_name": "Malloc2", 00:16:33.520 "name": "Malloc2", 00:16:33.520 "nguid": "512079B105694936A03A5B89032A5F40", 00:16:33.520 "uuid": "512079b1-0569-4936-a03a-5b89032a5f40" 00:16:33.520 } 00:16:33.520 ] 00:16:33.520 } 00:16:33.520 ] 00:16:33.520 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 100804 00:16:33.520 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:33.520 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:33.520 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:33.520 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:33.520 [2024-12-10 05:41:51.344874] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:16:33.520 [2024-12-10 05:41:51.344918] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101028 ] 00:16:33.520 [2024-12-10 05:41:51.382700] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:33.520 [2024-12-10 05:41:51.391461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:33.520 [2024-12-10 05:41:51.391486] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0e20917000 00:16:33.520 [2024-12-10 05:41:51.392457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.393462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.394466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.395474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.396477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.397484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.398496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.520 
[2024-12-10 05:41:51.399502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.520 [2024-12-10 05:41:51.400505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:33.520 [2024-12-10 05:41:51.400515] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0e2090c000 00:16:33.520 [2024-12-10 05:41:51.401428] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:33.520 [2024-12-10 05:41:51.414782] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:33.520 [2024-12-10 05:41:51.414808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:33.520 [2024-12-10 05:41:51.416858] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:33.520 [2024-12-10 05:41:51.416895] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:33.520 [2024-12-10 05:41:51.416965] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:33.520 [2024-12-10 05:41:51.416979] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:33.520 [2024-12-10 05:41:51.416984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:33.520 [2024-12-10 05:41:51.417862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:33.520 [2024-12-10 05:41:51.417871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:33.520 [2024-12-10 05:41:51.417880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:33.520 [2024-12-10 05:41:51.418861] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:33.520 [2024-12-10 05:41:51.418869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:33.520 [2024-12-10 05:41:51.418876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:33.520 [2024-12-10 05:41:51.419867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:33.520 [2024-12-10 05:41:51.419876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:33.520 [2024-12-10 05:41:51.420873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:33.520 [2024-12-10 05:41:51.420881] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:33.520 [2024-12-10 05:41:51.420886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:33.520 [2024-12-10 05:41:51.420892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:33.520 [2024-12-10 05:41:51.420999] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:33.520 [2024-12-10 05:41:51.421004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:33.520 [2024-12-10 05:41:51.421008] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:33.520 [2024-12-10 05:41:51.421888] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:33.520 [2024-12-10 05:41:51.422898] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:33.520 [2024-12-10 05:41:51.423908] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:33.520 [2024-12-10 05:41:51.424908] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.520 [2024-12-10 05:41:51.424945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:33.520 [2024-12-10 05:41:51.425921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:33.520 [2024-12-10 05:41:51.425931] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:33.520 [2024-12-10 05:41:51.425937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:33.520 [2024-12-10 05:41:51.425953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:33.520 [2024-12-10 05:41:51.425964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:33.520 [2024-12-10 05:41:51.425974] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:33.520 [2024-12-10 05:41:51.425980] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.520 [2024-12-10 05:41:51.425983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.520 [2024-12-10 05:41:51.425994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.520 [2024-12-10 05:41:51.432224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:33.520 [2024-12-10 05:41:51.432235] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:33.520 [2024-12-10 05:41:51.432239] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:33.520 [2024-12-10 05:41:51.432243] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:33.520 [2024-12-10 05:41:51.432247] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:33.520 [2024-12-10 05:41:51.432251] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:33.520 [2024-12-10 05:41:51.432255] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:33.520 [2024-12-10 05:41:51.432260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.432268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.432279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:33.521 [2024-12-10 05:41:51.440223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:33.521 [2024-12-10 05:41:51.440234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.521 [2024-12-10 05:41:51.440242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.521 [2024-12-10 05:41:51.440249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.521 [2024-12-10 05:41:51.440256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.521 [2024-12-10 05:41:51.440261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.440269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.440277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:33.521 [2024-12-10 05:41:51.448222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:33.521 [2024-12-10 05:41:51.448229] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:33.521 [2024-12-10 05:41:51.448233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.448239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.448248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.448256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:33.521 [2024-12-10 05:41:51.456221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:33.521 [2024-12-10 05:41:51.456274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.456284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:33.521 
[2024-12-10 05:41:51.456291] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:33.521 [2024-12-10 05:41:51.456295] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:33.521 [2024-12-10 05:41:51.456298] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.521 [2024-12-10 05:41:51.456304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:33.521 [2024-12-10 05:41:51.464223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:33.521 [2024-12-10 05:41:51.464233] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:33.521 [2024-12-10 05:41:51.464240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.464247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.464253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:33.521 [2024-12-10 05:41:51.464257] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.521 [2024-12-10 05:41:51.464260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.521 [2024-12-10 05:41:51.464266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.521 [2024-12-10 05:41:51.472223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:33.521 [2024-12-10 05:41:51.472235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.472242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:33.521 [2024-12-10 05:41:51.472249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:33.521 [2024-12-10 05:41:51.472253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.521 [2024-12-10 05:41:51.472256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.521 [2024-12-10 05:41:51.472261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.780 [2024-12-10 05:41:51.480222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:33.780 [2024-12-10 05:41:51.480231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480263] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:33.781 [2024-12-10 05:41:51.480267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:33.781 [2024-12-10 05:41:51.480272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:33.781 [2024-12-10 05:41:51.480287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.488223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 05:41:51.488235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.496221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 05:41:51.496232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.504221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 
05:41:51.504233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.512222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 05:41:51.512237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:33.781 [2024-12-10 05:41:51.512242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:33.781 [2024-12-10 05:41:51.512245] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:33.781 [2024-12-10 05:41:51.512248] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:33.781 [2024-12-10 05:41:51.512250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:33.781 [2024-12-10 05:41:51.512256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:33.781 [2024-12-10 05:41:51.512263] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:33.781 [2024-12-10 05:41:51.512266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:33.781 [2024-12-10 05:41:51.512269] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.781 [2024-12-10 05:41:51.512275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.512281] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:33.781 [2024-12-10 05:41:51.512286] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.781 [2024-12-10 05:41:51.512290] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.781 [2024-12-10 05:41:51.512295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.512301] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:33.781 [2024-12-10 05:41:51.512305] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:33.781 [2024-12-10 05:41:51.512308] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.781 [2024-12-10 05:41:51.512313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:33.781 [2024-12-10 05:41:51.520222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 05:41:51.520235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 05:41:51.520244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:33.781 [2024-12-10 05:41:51.520250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:33.781 ===================================================== 00:16:33.781 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:33.781 ===================================================== 00:16:33.781 Controller Capabilities/Features 00:16:33.781 
================================ 00:16:33.781 Vendor ID: 4e58 00:16:33.781 Subsystem Vendor ID: 4e58 00:16:33.781 Serial Number: SPDK2 00:16:33.781 Model Number: SPDK bdev Controller 00:16:33.781 Firmware Version: 25.01 00:16:33.781 Recommended Arb Burst: 6 00:16:33.781 IEEE OUI Identifier: 8d 6b 50 00:16:33.781 Multi-path I/O 00:16:33.781 May have multiple subsystem ports: Yes 00:16:33.781 May have multiple controllers: Yes 00:16:33.781 Associated with SR-IOV VF: No 00:16:33.781 Max Data Transfer Size: 131072 00:16:33.781 Max Number of Namespaces: 32 00:16:33.781 Max Number of I/O Queues: 127 00:16:33.781 NVMe Specification Version (VS): 1.3 00:16:33.781 NVMe Specification Version (Identify): 1.3 00:16:33.781 Maximum Queue Entries: 256 00:16:33.781 Contiguous Queues Required: Yes 00:16:33.781 Arbitration Mechanisms Supported 00:16:33.781 Weighted Round Robin: Not Supported 00:16:33.781 Vendor Specific: Not Supported 00:16:33.781 Reset Timeout: 15000 ms 00:16:33.781 Doorbell Stride: 4 bytes 00:16:33.781 NVM Subsystem Reset: Not Supported 00:16:33.781 Command Sets Supported 00:16:33.781 NVM Command Set: Supported 00:16:33.781 Boot Partition: Not Supported 00:16:33.781 Memory Page Size Minimum: 4096 bytes 00:16:33.781 Memory Page Size Maximum: 4096 bytes 00:16:33.781 Persistent Memory Region: Not Supported 00:16:33.781 Optional Asynchronous Events Supported 00:16:33.781 Namespace Attribute Notices: Supported 00:16:33.781 Firmware Activation Notices: Not Supported 00:16:33.781 ANA Change Notices: Not Supported 00:16:33.781 PLE Aggregate Log Change Notices: Not Supported 00:16:33.781 LBA Status Info Alert Notices: Not Supported 00:16:33.781 EGE Aggregate Log Change Notices: Not Supported 00:16:33.781 Normal NVM Subsystem Shutdown event: Not Supported 00:16:33.781 Zone Descriptor Change Notices: Not Supported 00:16:33.781 Discovery Log Change Notices: Not Supported 00:16:33.781 Controller Attributes 00:16:33.781 128-bit Host Identifier: Supported 00:16:33.781 
Non-Operational Permissive Mode: Not Supported 00:16:33.781 NVM Sets: Not Supported 00:16:33.781 Read Recovery Levels: Not Supported 00:16:33.781 Endurance Groups: Not Supported 00:16:33.781 Predictable Latency Mode: Not Supported 00:16:33.781 Traffic Based Keep ALive: Not Supported 00:16:33.781 Namespace Granularity: Not Supported 00:16:33.781 SQ Associations: Not Supported 00:16:33.781 UUID List: Not Supported 00:16:33.781 Multi-Domain Subsystem: Not Supported 00:16:33.781 Fixed Capacity Management: Not Supported 00:16:33.781 Variable Capacity Management: Not Supported 00:16:33.781 Delete Endurance Group: Not Supported 00:16:33.781 Delete NVM Set: Not Supported 00:16:33.781 Extended LBA Formats Supported: Not Supported 00:16:33.781 Flexible Data Placement Supported: Not Supported 00:16:33.781 00:16:33.781 Controller Memory Buffer Support 00:16:33.781 ================================ 00:16:33.781 Supported: No 00:16:33.781 00:16:33.781 Persistent Memory Region Support 00:16:33.781 ================================ 00:16:33.781 Supported: No 00:16:33.781 00:16:33.781 Admin Command Set Attributes 00:16:33.781 ============================ 00:16:33.781 Security Send/Receive: Not Supported 00:16:33.781 Format NVM: Not Supported 00:16:33.781 Firmware Activate/Download: Not Supported 00:16:33.781 Namespace Management: Not Supported 00:16:33.781 Device Self-Test: Not Supported 00:16:33.781 Directives: Not Supported 00:16:33.781 NVMe-MI: Not Supported 00:16:33.781 Virtualization Management: Not Supported 00:16:33.781 Doorbell Buffer Config: Not Supported 00:16:33.781 Get LBA Status Capability: Not Supported 00:16:33.781 Command & Feature Lockdown Capability: Not Supported 00:16:33.781 Abort Command Limit: 4 00:16:33.781 Async Event Request Limit: 4 00:16:33.781 Number of Firmware Slots: N/A 00:16:33.781 Firmware Slot 1 Read-Only: N/A 00:16:33.781 Firmware Activation Without Reset: N/A 00:16:33.781 Multiple Update Detection Support: N/A 00:16:33.781 Firmware Update 
Granularity: No Information Provided 00:16:33.781 Per-Namespace SMART Log: No 00:16:33.781 Asymmetric Namespace Access Log Page: Not Supported 00:16:33.781 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:33.781 Command Effects Log Page: Supported 00:16:33.781 Get Log Page Extended Data: Supported 00:16:33.781 Telemetry Log Pages: Not Supported 00:16:33.781 Persistent Event Log Pages: Not Supported 00:16:33.781 Supported Log Pages Log Page: May Support 00:16:33.781 Commands Supported & Effects Log Page: Not Supported 00:16:33.781 Feature Identifiers & Effects Log Page:May Support 00:16:33.781 NVMe-MI Commands & Effects Log Page: May Support 00:16:33.781 Data Area 4 for Telemetry Log: Not Supported 00:16:33.782 Error Log Page Entries Supported: 128 00:16:33.782 Keep Alive: Supported 00:16:33.782 Keep Alive Granularity: 10000 ms 00:16:33.782 00:16:33.782 NVM Command Set Attributes 00:16:33.782 ========================== 00:16:33.782 Submission Queue Entry Size 00:16:33.782 Max: 64 00:16:33.782 Min: 64 00:16:33.782 Completion Queue Entry Size 00:16:33.782 Max: 16 00:16:33.782 Min: 16 00:16:33.782 Number of Namespaces: 32 00:16:33.782 Compare Command: Supported 00:16:33.782 Write Uncorrectable Command: Not Supported 00:16:33.782 Dataset Management Command: Supported 00:16:33.782 Write Zeroes Command: Supported 00:16:33.782 Set Features Save Field: Not Supported 00:16:33.782 Reservations: Not Supported 00:16:33.782 Timestamp: Not Supported 00:16:33.782 Copy: Supported 00:16:33.782 Volatile Write Cache: Present 00:16:33.782 Atomic Write Unit (Normal): 1 00:16:33.782 Atomic Write Unit (PFail): 1 00:16:33.782 Atomic Compare & Write Unit: 1 00:16:33.782 Fused Compare & Write: Supported 00:16:33.782 Scatter-Gather List 00:16:33.782 SGL Command Set: Supported (Dword aligned) 00:16:33.782 SGL Keyed: Not Supported 00:16:33.782 SGL Bit Bucket Descriptor: Not Supported 00:16:33.782 SGL Metadata Pointer: Not Supported 00:16:33.782 Oversized SGL: Not Supported 00:16:33.782 SGL 
Metadata Address: Not Supported 00:16:33.782 SGL Offset: Not Supported 00:16:33.782 Transport SGL Data Block: Not Supported 00:16:33.782 Replay Protected Memory Block: Not Supported 00:16:33.782 00:16:33.782 Firmware Slot Information 00:16:33.782 ========================= 00:16:33.782 Active slot: 1 00:16:33.782 Slot 1 Firmware Revision: 25.01 00:16:33.782 00:16:33.782 00:16:33.782 Commands Supported and Effects 00:16:33.782 ============================== 00:16:33.782 Admin Commands 00:16:33.782 -------------- 00:16:33.782 Get Log Page (02h): Supported 00:16:33.782 Identify (06h): Supported 00:16:33.782 Abort (08h): Supported 00:16:33.782 Set Features (09h): Supported 00:16:33.782 Get Features (0Ah): Supported 00:16:33.782 Asynchronous Event Request (0Ch): Supported 00:16:33.782 Keep Alive (18h): Supported 00:16:33.782 I/O Commands 00:16:33.782 ------------ 00:16:33.782 Flush (00h): Supported LBA-Change 00:16:33.782 Write (01h): Supported LBA-Change 00:16:33.782 Read (02h): Supported 00:16:33.782 Compare (05h): Supported 00:16:33.782 Write Zeroes (08h): Supported LBA-Change 00:16:33.782 Dataset Management (09h): Supported LBA-Change 00:16:33.782 Copy (19h): Supported LBA-Change 00:16:33.782 00:16:33.782 Error Log 00:16:33.782 ========= 00:16:33.782 00:16:33.782 Arbitration 00:16:33.782 =========== 00:16:33.782 Arbitration Burst: 1 00:16:33.782 00:16:33.782 Power Management 00:16:33.782 ================ 00:16:33.782 Number of Power States: 1 00:16:33.782 Current Power State: Power State #0 00:16:33.782 Power State #0: 00:16:33.782 Max Power: 0.00 W 00:16:33.782 Non-Operational State: Operational 00:16:33.782 Entry Latency: Not Reported 00:16:33.782 Exit Latency: Not Reported 00:16:33.782 Relative Read Throughput: 0 00:16:33.782 Relative Read Latency: 0 00:16:33.782 Relative Write Throughput: 0 00:16:33.782 Relative Write Latency: 0 00:16:33.782 Idle Power: Not Reported 00:16:33.782 Active Power: Not Reported 00:16:33.782 Non-Operational Permissive Mode: Not 
Supported 00:16:33.782 00:16:33.782 Health Information 00:16:33.782 ================== 00:16:33.782 Critical Warnings: 00:16:33.782 Available Spare Space: OK 00:16:33.782 Temperature: OK 00:16:33.782 Device Reliability: OK 00:16:33.782 Read Only: No 00:16:33.782 Volatile Memory Backup: OK 00:16:33.782 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:33.782 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:33.782 Available Spare: 0% 00:16:33.782 Available Sp[2024-12-10 05:41:51.520334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:33.782 [2024-12-10 05:41:51.528222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:33.782 [2024-12-10 05:41:51.528249] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:33.782 [2024-12-10 05:41:51.528258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.782 [2024-12-10 05:41:51.528263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.782 [2024-12-10 05:41:51.528269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.782 [2024-12-10 05:41:51.528274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.782 [2024-12-10 05:41:51.528323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:33.782 [2024-12-10 05:41:51.528333] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:33.782 
[2024-12-10 05:41:51.529326] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.782 [2024-12-10 05:41:51.529368] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:33.782 [2024-12-10 05:41:51.529374] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:33.782 [2024-12-10 05:41:51.530333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:33.782 [2024-12-10 05:41:51.530344] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:33.782 [2024-12-10 05:41:51.530393] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:33.782 [2024-12-10 05:41:51.533223] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:33.782 are Threshold: 0% 00:16:33.782 Life Percentage Used: 0% 00:16:33.782 Data Units Read: 0 00:16:33.782 Data Units Written: 0 00:16:33.782 Host Read Commands: 0 00:16:33.782 Host Write Commands: 0 00:16:33.782 Controller Busy Time: 0 minutes 00:16:33.782 Power Cycles: 0 00:16:33.782 Power On Hours: 0 hours 00:16:33.782 Unsafe Shutdowns: 0 00:16:33.782 Unrecoverable Media Errors: 0 00:16:33.782 Lifetime Error Log Entries: 0 00:16:33.782 Warning Temperature Time: 0 minutes 00:16:33.782 Critical Temperature Time: 0 minutes 00:16:33.782 00:16:33.782 Number of Queues 00:16:33.782 ================ 00:16:33.782 Number of I/O Submission Queues: 127 00:16:33.782 Number of I/O Completion Queues: 127 00:16:33.782 00:16:33.782 Active Namespaces 00:16:33.782 ================= 00:16:33.782 Namespace ID:1 00:16:33.782 Error Recovery Timeout: Unlimited 
00:16:33.782 Command Set Identifier: NVM (00h) 00:16:33.782 Deallocate: Supported 00:16:33.782 Deallocated/Unwritten Error: Not Supported 00:16:33.782 Deallocated Read Value: Unknown 00:16:33.782 Deallocate in Write Zeroes: Not Supported 00:16:33.782 Deallocated Guard Field: 0xFFFF 00:16:33.782 Flush: Supported 00:16:33.782 Reservation: Supported 00:16:33.782 Namespace Sharing Capabilities: Multiple Controllers 00:16:33.782 Size (in LBAs): 131072 (0GiB) 00:16:33.782 Capacity (in LBAs): 131072 (0GiB) 00:16:33.782 Utilization (in LBAs): 131072 (0GiB) 00:16:33.782 NGUID: 512079B105694936A03A5B89032A5F40 00:16:33.782 UUID: 512079b1-0569-4936-a03a-5b89032a5f40 00:16:33.782 Thin Provisioning: Not Supported 00:16:33.782 Per-NS Atomic Units: Yes 00:16:33.782 Atomic Boundary Size (Normal): 0 00:16:33.782 Atomic Boundary Size (PFail): 0 00:16:33.782 Atomic Boundary Offset: 0 00:16:33.782 Maximum Single Source Range Length: 65535 00:16:33.782 Maximum Copy Length: 65535 00:16:33.782 Maximum Source Range Count: 1 00:16:33.782 NGUID/EUI64 Never Reused: No 00:16:33.782 Namespace Write Protected: No 00:16:33.782 Number of LBA Formats: 1 00:16:33.782 Current LBA Format: LBA Format #00 00:16:33.782 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:33.782 00:16:33.782 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:34.048 [2024-12-10 05:41:51.761468] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:39.319 Initializing NVMe Controllers 00:16:39.319 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:39.319 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:39.319 Initialization complete. Launching workers. 00:16:39.320 ======================================================== 00:16:39.320 Latency(us) 00:16:39.320 Device Information : IOPS MiB/s Average min max 00:16:39.320 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39990.16 156.21 3201.15 973.80 9407.47 00:16:39.320 ======================================================== 00:16:39.320 Total : 39990.16 156.21 3201.15 973.80 9407.47 00:16:39.320 00:16:39.320 [2024-12-10 05:41:56.866474] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:39.320 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:39.320 [2024-12-10 05:41:57.105194] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:44.589 Initializing NVMe Controllers 00:16:44.589 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:44.589 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:44.589 Initialization complete. Launching workers. 
00:16:44.589 ======================================================== 00:16:44.589 Latency(us) 00:16:44.589 Device Information : IOPS MiB/s Average min max 00:16:44.589 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39959.60 156.09 3203.24 960.88 7610.04 00:16:44.590 ======================================================== 00:16:44.590 Total : 39959.60 156.09 3203.24 960.88 7610.04 00:16:44.590 00:16:44.590 [2024-12-10 05:42:02.125366] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:44.590 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:44.590 [2024-12-10 05:42:02.339610] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.853 [2024-12-10 05:42:07.471310] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.853 Initializing NVMe Controllers 00:16:49.853 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:49.853 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:49.854 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:49.854 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:49.854 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:49.854 Initialization complete. Launching workers. 
00:16:49.854 Starting thread on core 2 00:16:49.854 Starting thread on core 3 00:16:49.854 Starting thread on core 1 00:16:49.854 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:49.854 [2024-12-10 05:42:07.777256] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:53.238 [2024-12-10 05:42:10.833555] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:53.238 Initializing NVMe Controllers 00:16:53.238 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.238 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:53.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:53.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:53.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:53.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:53.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:53.238 Initialization complete. Launching workers. 
00:16:53.238 Starting thread on core 1 with urgent priority queue 00:16:53.238 Starting thread on core 2 with urgent priority queue 00:16:53.238 Starting thread on core 3 with urgent priority queue 00:16:53.238 Starting thread on core 0 with urgent priority queue 00:16:53.238 SPDK bdev Controller (SPDK2 ) core 0: 8501.33 IO/s 11.76 secs/100000 ios 00:16:53.238 SPDK bdev Controller (SPDK2 ) core 1: 9242.33 IO/s 10.82 secs/100000 ios 00:16:53.238 SPDK bdev Controller (SPDK2 ) core 2: 9272.33 IO/s 10.78 secs/100000 ios 00:16:53.238 SPDK bdev Controller (SPDK2 ) core 3: 10160.67 IO/s 9.84 secs/100000 ios 00:16:53.238 ======================================================== 00:16:53.238 00:16:53.238 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:53.238 [2024-12-10 05:42:11.126656] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:53.238 Initializing NVMe Controllers 00:16:53.238 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.238 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.238 Namespace ID: 1 size: 0GB 00:16:53.238 Initialization complete. 00:16:53.238 INFO: using host memory buffer for IO 00:16:53.238 Hello world! 
00:16:53.238 [2024-12-10 05:42:11.135716] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:53.238 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:53.496 [2024-12-10 05:42:11.425574] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.865 Initializing NVMe Controllers 00:16:54.865 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.865 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.865 Initialization complete. Launching workers. 00:16:54.865 submit (in ns) avg, min, max = 6333.5, 3128.6, 4002621.9 00:16:54.865 complete (in ns) avg, min, max = 23023.6, 1725.7, 4202486.7 00:16:54.865 00:16:54.865 Submit histogram 00:16:54.865 ================ 00:16:54.865 Range in us Cumulative Count 00:16:54.865 3.124 - 3.139: 0.0062% ( 1) 00:16:54.865 3.139 - 3.154: 0.0247% ( 3) 00:16:54.865 3.154 - 3.170: 0.0494% ( 4) 00:16:54.865 3.170 - 3.185: 0.0864% ( 6) 00:16:54.865 3.185 - 3.200: 0.3888% ( 49) 00:16:54.865 3.200 - 3.215: 2.1602% ( 287) 00:16:54.865 3.215 - 3.230: 6.5177% ( 706) 00:16:54.865 3.230 - 3.246: 11.7825% ( 853) 00:16:54.865 3.246 - 3.261: 17.3806% ( 907) 00:16:54.865 3.261 - 3.276: 24.5525% ( 1162) 00:16:54.865 3.276 - 3.291: 30.7863% ( 1010) 00:16:54.865 3.291 - 3.307: 36.8041% ( 975) 00:16:54.865 3.307 - 3.322: 42.7787% ( 968) 00:16:54.865 3.322 - 3.337: 48.0990% ( 862) 00:16:54.865 3.337 - 3.352: 53.1971% ( 826) 00:16:54.865 3.352 - 3.368: 58.8878% ( 922) 00:16:54.865 3.368 - 3.383: 66.1832% ( 1182) 00:16:54.865 3.383 - 3.398: 71.8985% ( 926) 00:16:54.865 3.398 - 3.413: 77.9533% ( 981) 00:16:54.865 3.413 - 3.429: 81.5393% ( 581) 00:16:54.865 3.429 - 3.444: 84.3044% ( 448) 
00:16:54.866 3.444 - 3.459: 86.1067% ( 292) 00:16:54.866 3.459 - 3.474: 86.9461% ( 136) 00:16:54.866 3.474 - 3.490: 87.4522% ( 82) 00:16:54.866 3.490 - 3.505: 87.8348% ( 62) 00:16:54.866 3.505 - 3.520: 88.4088% ( 93) 00:16:54.866 3.520 - 3.535: 89.1310% ( 117) 00:16:54.866 3.535 - 3.550: 90.0074% ( 142) 00:16:54.866 3.550 - 3.566: 90.9209% ( 148) 00:16:54.866 3.566 - 3.581: 91.7418% ( 133) 00:16:54.866 3.581 - 3.596: 92.6182% ( 142) 00:16:54.866 3.596 - 3.611: 93.4267% ( 131) 00:16:54.866 3.611 - 3.627: 94.3772% ( 154) 00:16:54.866 3.627 - 3.642: 95.4080% ( 167) 00:16:54.866 3.642 - 3.657: 96.4202% ( 164) 00:16:54.866 3.657 - 3.672: 97.1732% ( 122) 00:16:54.866 3.672 - 3.688: 97.7534% ( 94) 00:16:54.866 3.688 - 3.703: 98.1669% ( 67) 00:16:54.866 3.703 - 3.718: 98.4693% ( 49) 00:16:54.866 3.718 - 3.733: 98.7532% ( 46) 00:16:54.866 3.733 - 3.749: 98.9507% ( 32) 00:16:54.866 3.749 - 3.764: 99.1236% ( 28) 00:16:54.866 3.764 - 3.779: 99.2285% ( 17) 00:16:54.866 3.779 - 3.794: 99.2964% ( 11) 00:16:54.866 3.794 - 3.810: 99.3458% ( 8) 00:16:54.866 3.810 - 3.825: 99.4075% ( 10) 00:16:54.866 3.825 - 3.840: 99.4198% ( 2) 00:16:54.866 3.840 - 3.855: 99.4322% ( 2) 00:16:54.866 3.855 - 3.870: 99.4569% ( 4) 00:16:54.866 3.870 - 3.886: 99.4754% ( 3) 00:16:54.866 3.886 - 3.901: 99.4877% ( 2) 00:16:54.866 3.901 - 3.931: 99.5186% ( 5) 00:16:54.866 3.931 - 3.962: 99.5371% ( 3) 00:16:54.866 3.962 - 3.992: 99.5494% ( 2) 00:16:54.866 3.992 - 4.023: 99.5680% ( 3) 00:16:54.866 4.023 - 4.053: 99.5803% ( 2) 00:16:54.866 4.053 - 4.084: 99.5926% ( 2) 00:16:54.866 4.114 - 4.145: 99.5988% ( 1) 00:16:54.866 4.145 - 4.175: 99.6050% ( 1) 00:16:54.866 4.236 - 4.267: 99.6112% ( 1) 00:16:54.866 4.297 - 4.328: 99.6173% ( 1) 00:16:54.866 4.328 - 4.358: 99.6235% ( 1) 00:16:54.866 4.389 - 4.419: 99.6297% ( 1) 00:16:54.866 4.663 - 4.693: 99.6358% ( 1) 00:16:54.866 5.272 - 5.303: 99.6482% ( 2) 00:16:54.866 5.303 - 5.333: 99.6544% ( 1) 00:16:54.866 5.364 - 5.394: 99.6605% ( 1) 00:16:54.866 5.394 - 5.425: 
99.6667% ( 1) 00:16:54.866 5.455 - 5.486: 99.6791% ( 2) 00:16:54.866 5.730 - 5.760: 99.6852% ( 1) 00:16:54.866 5.760 - 5.790: 99.6914% ( 1) 00:16:54.866 5.943 - 5.973: 99.6976% ( 1) 00:16:54.866 6.217 - 6.248: 99.7037% ( 1) 00:16:54.866 6.248 - 6.278: 99.7099% ( 1) 00:16:54.866 6.370 - 6.400: 99.7161% ( 1) 00:16:54.866 6.674 - 6.705: 99.7223% ( 1) 00:16:54.866 6.766 - 6.796: 99.7284% ( 1) 00:16:54.866 6.827 - 6.857: 99.7408% ( 2) 00:16:54.866 6.888 - 6.918: 99.7469% ( 1) 00:16:54.866 6.979 - 7.010: 99.7531% ( 1) 00:16:54.866 7.070 - 7.101: 99.7593% ( 1) 00:16:54.866 7.192 - 7.223: 99.7655% ( 1) 00:16:54.866 7.253 - 7.284: 99.7716% ( 1) 00:16:54.866 7.284 - 7.314: 99.7778% ( 1) 00:16:54.866 7.345 - 7.375: 99.7901% ( 2) 00:16:54.866 7.375 - 7.406: 99.8087% ( 3) 00:16:54.866 7.558 - 7.589: 99.8148% ( 1) 00:16:54.866 7.589 - 7.619: 99.8210% ( 1) 00:16:54.866 7.802 - 7.863: 99.8272% ( 1) 00:16:54.866 7.863 - 7.924: 99.8395% ( 2) 00:16:54.866 7.924 - 7.985: 99.8457% ( 1) 00:16:54.866 7.985 - 8.046: 99.8519% ( 1) 00:16:54.866 8.046 - 8.107: 99.8580% ( 1) 00:16:54.866 8.107 - 8.168: 99.8642% ( 1) 00:16:54.866 8.229 - 8.290: 99.8704% ( 1) 00:16:54.866 8.533 - 8.594: 99.8766% ( 1) 00:16:54.866 8.716 - 8.777: 99.8827% ( 1) 00:16:54.866 8.777 - 8.838: 99.8889% ( 1) 00:16:54.866 8.899 - 8.960: 99.8951% ( 1) 00:16:54.866 9.021 - 9.082: 99.9012% ( 1) 00:16:54.866 9.387 - 9.448: 99.9074% ( 1) 00:16:54.866 9.570 - 9.630: 99.9136% ( 1) 00:16:54.866 9.996 - 10.057: 99.9198% ( 1) 00:16:54.866 10.179 - 10.240: 99.9259% ( 1) 00:16:54.866 3994.575 - 4025.783: 100.0000% ( 12) 00:16:54.866 00:16:54.866 Complete histogram 00:16:54.866 ================== 00:16:54.866 Range in us Cumulative Count 00:16:54.866 1.722 - 1.730: 0.0062% ( 1) 00:16:54.866 1.730 - 1.737: 0.0432% ( 6) 00:16:54.866 1.737 - 1.745: 0.0679% ( 4) 00:16:54.866 1.745 - 1.752: 0.0802% ( 2) 00:16:54.866 1.752 - 1.760: 0.0988% ( 3) 00:16:54.866 1.760 - 1.768: 0.7160% ( 100) 00:16:54.866 1.768 - 1.775: 6.7214% ( 973) 
00:16:54.866 1.775 - 1.783: 23.4724% ( 2714) 00:16:54.866 1.783 - 1.790: 40.6184% ( 2778) 00:16:54.866 1.790 - 1.798: 48.8520% ( 1334) 00:16:54.866 1.798 - 1.806: 52.1294% ( 531) 00:16:54.866 1.806 - 1.813: 54.3760% ( 364) 00:16:54.866 1.813 - 1.821: 59.0915% ( 764) 00:16:54.866 1.821 - 1.829: 71.0530% ( 1938) 00:16:54.866 1.829 - 1.836: 84.1501% ( 2122) 00:16:54.866 1.836 - 1.844: 91.2789% ( 1155) 00:16:54.866 1.844 - 1.851: 94.3587% ( 499) 00:16:54.866 1.851 - 1.859: 95.9943% ( 265) 00:16:54.866 1.859 - 1.867: 96.9016% ( 147) 00:16:54.866 1.867 - 1.874: 97.3707% ( 76) 00:16:54.866 1.874 - 1.882: 97.6670% ( 48) 00:16:54.866 1.882 - 1.890: 97.8953% ( 37) 00:16:54.866 1.890 - 1.897: 98.0928% ( 32) 00:16:54.866 1.897 - 1.905: 98.3521% ( 42) 00:16:54.866 1.905 - 1.912: 98.5557% ( 33) 00:16:54.866 1.912 - 1.920: 98.7100% ( 25) 00:16:54.866 1.920 - 1.928: 98.7964% ( 14) 00:16:54.866 1.928 - 1.935: 98.8643% ( 11) 00:16:54.866 1.935 - 1.943: 98.9014% ( 6) 00:16:54.866 1.943 - 1.950: 98.9137% ( 2) 00:16:54.866 1.950 - 1.966: 98.9261% ( 2) 00:16:54.866 1.966 - 1.981: 98.9446% ( 3) 00:16:54.866 1.981 - 1.996: 98.9754% ( 5) 00:16:54.866 1.996 - 2.011: 98.9816% ( 1) 00:16:54.866 2.011 - 2.027: 99.0248% ( 7) 00:16:54.866 2.027 - 2.042: 99.0557% ( 5) 00:16:54.866 2.042 - 2.057: 99.0804% ( 4) 00:16:54.866 2.057 - 2.072: 99.0927% ( 2) 00:16:54.867 2.072 - 2.088: 99.0989% ( 1) 00:16:54.867 2.088 - 2.103: 99.1174% ( 3) 00:16:54.867 2.103 - 2.118: 99.1483% ( 5) 00:16:54.867 2.118 - 2.133: 99.1668% ( 3) 00:16:54.867 2.133 - 2.149: 99.1729% ( 1) 00:16:54.867 2.149 - 2.164: 99.1853% ( 2) 00:16:54.867 2.164 - 2.179: 99.1915% ( 1) 00:16:54.867 2.179 - 2.194: 99.2038% ( 2) 00:16:54.867 2.194 - 2.210: 99.2100% ( 1) 00:16:54.867 2.210 - 2.225: 99.2223% ( 2) 00:16:54.867 2.255 - 2.270: 99.2285% ( 1) 00:16:54.867 2.270 - 2.286: 99.2470% ( 3) 00:16:54.867 2.286 - 2.301: 99.2532% ( 1) 00:16:54.867 2.316 - 2.331: 99.2594% ( 1) 00:16:54.867 2.331 - 2.347: 99.2840% ( 4) 00:16:54.867 2.362 - 2.377: 
99.2902% ( 1) 00:16:54.867 2.392 - 2.408: 99.3026% ( 2) 00:16:54.867 2.408 - 2.423: 99.3087% ( 1) 00:16:54.867 2.438 - 2.453: 99.3149% ( 1) 00:16:54.867 2.606 - 2.621: 99.3211% ( 1) 00:16:54.867 3.703 - 3.718: 99.3272% ( 1) 00:16:54.867 [2024-12-10 05:42:12.519220] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:54.867 4.724 - 4.754: 99.3396% ( 2) 00:16:54.867 4.876 - 4.907: 99.3458% ( 1) 00:16:54.867 4.937 - 4.968: 99.3519% ( 1) 00:16:54.867 5.181 - 5.211: 99.3581% ( 1) 00:16:54.867 5.272 - 5.303: 99.3643% ( 1) 00:16:54.867 5.425 - 5.455: 99.3704% ( 1) 00:16:54.867 5.547 - 5.577: 99.3766% ( 1) 00:16:54.867 5.669 - 5.699: 99.3828% ( 1) 00:16:54.867 5.821 - 5.851: 99.3890% ( 1) 00:16:54.867 5.912 - 5.943: 99.3951% ( 1) 00:16:54.867 6.430 - 6.461: 99.4013% ( 1) 00:16:54.867 6.705 - 6.735: 99.4075% ( 1) 00:16:54.867 6.827 - 6.857: 99.4137% ( 1) 00:16:54.867 6.888 - 6.918: 99.4198% ( 1) 00:16:54.867 7.162 - 7.192: 99.4260% ( 1) 00:16:54.867 7.345 - 7.375: 99.4322% ( 1) 00:16:54.867 7.710 - 7.741: 99.4383% ( 1) 00:16:54.867 7.985 - 8.046: 99.4507% ( 2) 00:16:54.867 9.691 - 9.752: 99.4569% ( 1) 00:16:54.867 14.324 - 14.385: 99.4630% ( 1) 00:16:54.867 784.091 - 787.992: 99.4692% ( 1) 00:16:54.867 2995.931 - 3011.535: 99.4754% ( 1) 00:16:54.867 3994.575 - 4025.783: 99.9938% ( 84) 00:16:54.867 4181.821 - 4213.029: 100.0000% ( 1) 00:16:54.867 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local 
malloc_num=Malloc4 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:54.867 [ 00:16:54.867 { 00:16:54.867 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:54.867 "subtype": "Discovery", 00:16:54.867 "listen_addresses": [], 00:16:54.867 "allow_any_host": true, 00:16:54.867 "hosts": [] 00:16:54.867 }, 00:16:54.867 { 00:16:54.867 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:54.867 "subtype": "NVMe", 00:16:54.867 "listen_addresses": [ 00:16:54.867 { 00:16:54.867 "trtype": "VFIOUSER", 00:16:54.867 "adrfam": "IPv4", 00:16:54.867 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:54.867 "trsvcid": "0" 00:16:54.867 } 00:16:54.867 ], 00:16:54.867 "allow_any_host": true, 00:16:54.867 "hosts": [], 00:16:54.867 "serial_number": "SPDK1", 00:16:54.867 "model_number": "SPDK bdev Controller", 00:16:54.867 "max_namespaces": 32, 00:16:54.867 "min_cntlid": 1, 00:16:54.867 "max_cntlid": 65519, 00:16:54.867 "namespaces": [ 00:16:54.867 { 00:16:54.867 "nsid": 1, 00:16:54.867 "bdev_name": "Malloc1", 00:16:54.867 "name": "Malloc1", 00:16:54.867 "nguid": "845AF6A01B5C4017974099D7F6BF0D2E", 00:16:54.867 "uuid": "845af6a0-1b5c-4017-9740-99d7f6bf0d2e" 00:16:54.867 }, 00:16:54.867 { 00:16:54.867 "nsid": 2, 00:16:54.867 "bdev_name": "Malloc3", 00:16:54.867 "name": "Malloc3", 00:16:54.867 "nguid": "720D2B97B7BF47EBA8C1E405C6637903", 00:16:54.867 "uuid": "720d2b97-b7bf-47eb-a8c1-e405c6637903" 00:16:54.867 } 00:16:54.867 ] 00:16:54.867 }, 00:16:54.867 { 00:16:54.867 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:54.867 "subtype": "NVMe", 00:16:54.867 "listen_addresses": [ 00:16:54.867 { 00:16:54.867 "trtype": "VFIOUSER", 00:16:54.867 "adrfam": "IPv4", 00:16:54.867 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:54.867 "trsvcid": "0" 00:16:54.867 } 00:16:54.867 ], 00:16:54.867 "allow_any_host": true, 00:16:54.867 "hosts": [], 00:16:54.867 
"serial_number": "SPDK2", 00:16:54.867 "model_number": "SPDK bdev Controller", 00:16:54.867 "max_namespaces": 32, 00:16:54.867 "min_cntlid": 1, 00:16:54.867 "max_cntlid": 65519, 00:16:54.867 "namespaces": [ 00:16:54.867 { 00:16:54.867 "nsid": 1, 00:16:54.867 "bdev_name": "Malloc2", 00:16:54.867 "name": "Malloc2", 00:16:54.867 "nguid": "512079B105694936A03A5B89032A5F40", 00:16:54.867 "uuid": "512079b1-0569-4936-a03a-5b89032a5f40" 00:16:54.867 } 00:16:54.867 ] 00:16:54.867 } 00:16:54.867 ] 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=104949 00:16:54.867 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:54.868 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:54.868 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:54.868 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:54.868 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:54.868 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:54.868 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:55.124 [2024-12-10 05:42:12.921662] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.124 Malloc4 00:16:55.124 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:55.381 [2024-12-10 05:42:13.158440] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.381 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:55.381 Asynchronous Event Request test 00:16:55.381 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.381 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.381 Registering asynchronous event callbacks... 00:16:55.381 Starting namespace attribute notice tests for all controllers... 00:16:55.381 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:55.381 aer_cb - Changed Namespace 00:16:55.381 Cleaning up... 
00:16:55.638 [ 00:16:55.638 { 00:16:55.638 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:55.638 "subtype": "Discovery", 00:16:55.638 "listen_addresses": [], 00:16:55.638 "allow_any_host": true, 00:16:55.638 "hosts": [] 00:16:55.638 }, 00:16:55.638 { 00:16:55.638 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:55.638 "subtype": "NVMe", 00:16:55.638 "listen_addresses": [ 00:16:55.638 { 00:16:55.638 "trtype": "VFIOUSER", 00:16:55.638 "adrfam": "IPv4", 00:16:55.638 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:55.639 "trsvcid": "0" 00:16:55.639 } 00:16:55.639 ], 00:16:55.639 "allow_any_host": true, 00:16:55.639 "hosts": [], 00:16:55.639 "serial_number": "SPDK1", 00:16:55.639 "model_number": "SPDK bdev Controller", 00:16:55.639 "max_namespaces": 32, 00:16:55.639 "min_cntlid": 1, 00:16:55.639 "max_cntlid": 65519, 00:16:55.639 "namespaces": [ 00:16:55.639 { 00:16:55.639 "nsid": 1, 00:16:55.639 "bdev_name": "Malloc1", 00:16:55.639 "name": "Malloc1", 00:16:55.639 "nguid": "845AF6A01B5C4017974099D7F6BF0D2E", 00:16:55.639 "uuid": "845af6a0-1b5c-4017-9740-99d7f6bf0d2e" 00:16:55.639 }, 00:16:55.639 { 00:16:55.639 "nsid": 2, 00:16:55.639 "bdev_name": "Malloc3", 00:16:55.639 "name": "Malloc3", 00:16:55.639 "nguid": "720D2B97B7BF47EBA8C1E405C6637903", 00:16:55.639 "uuid": "720d2b97-b7bf-47eb-a8c1-e405c6637903" 00:16:55.639 } 00:16:55.639 ] 00:16:55.639 }, 00:16:55.639 { 00:16:55.639 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:55.639 "subtype": "NVMe", 00:16:55.639 "listen_addresses": [ 00:16:55.639 { 00:16:55.639 "trtype": "VFIOUSER", 00:16:55.639 "adrfam": "IPv4", 00:16:55.639 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:55.639 "trsvcid": "0" 00:16:55.639 } 00:16:55.639 ], 00:16:55.639 "allow_any_host": true, 00:16:55.639 "hosts": [], 00:16:55.639 "serial_number": "SPDK2", 00:16:55.639 "model_number": "SPDK bdev Controller", 00:16:55.639 "max_namespaces": 32, 00:16:55.639 "min_cntlid": 1, 00:16:55.639 "max_cntlid": 65519, 00:16:55.639 "namespaces": [ 
00:16:55.639 { 00:16:55.639 "nsid": 1, 00:16:55.639 "bdev_name": "Malloc2", 00:16:55.639 "name": "Malloc2", 00:16:55.639 "nguid": "512079B105694936A03A5B89032A5F40", 00:16:55.639 "uuid": "512079b1-0569-4936-a03a-5b89032a5f40" 00:16:55.639 }, 00:16:55.639 { 00:16:55.639 "nsid": 2, 00:16:55.639 "bdev_name": "Malloc4", 00:16:55.639 "name": "Malloc4", 00:16:55.639 "nguid": "C70CAE6A0DF3474E817DDCB0954F9F26", 00:16:55.639 "uuid": "c70cae6a-0df3-474e-817d-dcb0954f9f26" 00:16:55.639 } 00:16:55.639 ] 00:16:55.639 } 00:16:55.639 ] 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 104949 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 96705 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 96705 ']' 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 96705 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96705 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96705' 00:16:55.639 killing process with pid 96705 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 96705 00:16:55.639 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 96705 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=105181 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 105181' 00:16:55.897 Process pid: 105181 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 105181 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 105181 ']' 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.897 05:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.897 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:55.897 [2024-12-10 05:42:13.732925] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:55.897 [2024-12-10 05:42:13.733755] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:16:55.897 [2024-12-10 05:42:13.733791] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.897 [2024-12-10 05:42:13.812193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.156 [2024-12-10 05:42:13.852870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.156 [2024-12-10 05:42:13.852906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.156 [2024-12-10 05:42:13.852913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.156 [2024-12-10 05:42:13.852919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.156 [2024-12-10 05:42:13.852924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.156 [2024-12-10 05:42:13.854372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.156 [2024-12-10 05:42:13.854406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.156 [2024-12-10 05:42:13.854513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.156 [2024-12-10 05:42:13.854514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.156 [2024-12-10 05:42:13.922466] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:56.156 [2024-12-10 05:42:13.923298] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:56.156 [2024-12-10 05:42:13.923512] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:56.156 [2024-12-10 05:42:13.923932] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:56.156 [2024-12-10 05:42:13.923978] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:56.156 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.156 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:56.156 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:57.094 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:57.352 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:57.352 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:57.352 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:57.352 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:57.352 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:57.611 Malloc1 00:16:57.611 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:57.868 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:57.868 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:58.126 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:58.126 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:58.126 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:58.385 Malloc2 00:16:58.385 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:58.643 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 105181 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 105181 ']' 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 105181 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.901 05:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105181 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105181' 00:16:58.901 killing process with pid 105181 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 105181 00:16:58.901 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 105181 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:59.160 00:16:59.160 real 0m51.500s 00:16:59.160 user 3m19.538s 00:16:59.160 sys 0m3.263s 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:59.160 ************************************ 00:16:59.160 END TEST nvmf_vfio_user 00:16:59.160 ************************************ 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.160 ************************************ 00:16:59.160 START TEST nvmf_vfio_user_nvme_compliance 00:16:59.160 ************************************ 00:16:59.160 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:59.420 * Looking for test storage... 00:16:59.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.420 05:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.420 05:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.420 --rc genhtml_branch_coverage=1 00:16:59.420 --rc genhtml_function_coverage=1 00:16:59.420 --rc genhtml_legend=1 00:16:59.420 --rc geninfo_all_blocks=1 00:16:59.420 --rc geninfo_unexecuted_blocks=1 00:16:59.420 00:16:59.420 ' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.420 --rc genhtml_branch_coverage=1 00:16:59.420 --rc genhtml_function_coverage=1 00:16:59.420 --rc genhtml_legend=1 00:16:59.420 --rc geninfo_all_blocks=1 00:16:59.420 --rc geninfo_unexecuted_blocks=1 00:16:59.420 00:16:59.420 ' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.420 --rc genhtml_branch_coverage=1 00:16:59.420 --rc genhtml_function_coverage=1 00:16:59.420 --rc 
genhtml_legend=1 00:16:59.420 --rc geninfo_all_blocks=1 00:16:59.420 --rc geninfo_unexecuted_blocks=1 00:16:59.420 00:16:59.420 ' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.420 --rc genhtml_branch_coverage=1 00:16:59.420 --rc genhtml_function_coverage=1 00:16:59.420 --rc genhtml_legend=1 00:16:59.420 --rc geninfo_all_blocks=1 00:16:59.420 --rc geninfo_unexecuted_blocks=1 00:16:59.420 00:16:59.420 ' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.420 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.421 05:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.421 05:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=105740 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 105740' 00:16:59.421 Process pid: 105740 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 105740 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 105740 ']' 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.421 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.421 [2024-12-10 05:42:17.368208] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:16:59.421 [2024-12-10 05:42:17.368268] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.678 [2024-12-10 05:42:17.448289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.678 [2024-12-10 05:42:17.488548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.678 [2024-12-10 05:42:17.488583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.678 [2024-12-10 05:42:17.488590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.678 [2024-12-10 05:42:17.488596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.678 [2024-12-10 05:42:17.488602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:59.678 [2024-12-10 05:42:17.489897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.678 [2024-12-10 05:42:17.490009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.678 [2024-12-10 05:42:17.490010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.678 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.678 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:59.678 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 05:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 malloc0 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:01.050 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:01.050 00:17:01.050 00:17:01.050 CUnit - A unit testing framework for C - Version 2.1-3 00:17:01.050 http://cunit.sourceforge.net/ 00:17:01.050 00:17:01.050 00:17:01.050 Suite: nvme_compliance 00:17:01.051 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 05:42:18.820624] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.051 [2024-12-10 05:42:18.821963] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:01.051 [2024-12-10 05:42:18.821978] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:01.051 [2024-12-10 05:42:18.821986] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:01.051 [2024-12-10 05:42:18.823650] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.051 passed 00:17:01.051 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 05:42:18.903210] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.051 [2024-12-10 05:42:18.906228] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.051 passed 00:17:01.051 Test: admin_identify_ns ...[2024-12-10 05:42:18.985540] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.308 [2024-12-10 05:42:19.046233] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:01.308 [2024-12-10 05:42:19.054238] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:01.308 [2024-12-10 05:42:19.075315] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:01.309 passed 00:17:01.309 Test: admin_get_features_mandatory_features ...[2024-12-10 05:42:19.150336] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.309 [2024-12-10 05:42:19.153362] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.309 passed 00:17:01.309 Test: admin_get_features_optional_features ...[2024-12-10 05:42:19.230848] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.309 [2024-12-10 05:42:19.233869] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.566 passed 00:17:01.566 Test: admin_set_features_number_of_queues ...[2024-12-10 05:42:19.310642] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.566 [2024-12-10 05:42:19.419323] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.566 passed 00:17:01.566 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 05:42:19.494185] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.566 [2024-12-10 05:42:19.497203] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.824 passed 00:17:01.824 Test: admin_get_log_page_with_lpo ...[2024-12-10 05:42:19.573139] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.824 [2024-12-10 05:42:19.642227] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:01.824 [2024-12-10 05:42:19.655312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.824 passed 00:17:01.824 Test: fabric_property_get ...[2024-12-10 05:42:19.729003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.824 [2024-12-10 05:42:19.730235] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:01.824 [2024-12-10 05:42:19.732026] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.824 passed 00:17:02.081 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 05:42:19.808529] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.081 [2024-12-10 05:42:19.809763] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:02.081 [2024-12-10 05:42:19.811555] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.081 passed 00:17:02.081 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 05:42:19.889332] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.081 [2024-12-10 05:42:19.972227] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:02.081 [2024-12-10 05:42:19.988235] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:02.081 [2024-12-10 05:42:19.993298] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.081 passed 00:17:02.339 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 05:42:20.072343] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.339 [2024-12-10 05:42:20.073580] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:02.339 [2024-12-10 05:42:20.075369] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.339 passed 00:17:02.339 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 05:42:20.149158] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.339 [2024-12-10 05:42:20.226237] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:02.339 [2024-12-10 
05:42:20.250232] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:02.339 [2024-12-10 05:42:20.255365] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.339 passed 00:17:02.597 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 05:42:20.332259] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.597 [2024-12-10 05:42:20.333498] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:02.597 [2024-12-10 05:42:20.333522] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:02.597 [2024-12-10 05:42:20.335282] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.597 passed 00:17:02.597 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 05:42:20.414611] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.597 [2024-12-10 05:42:20.507236] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:02.597 [2024-12-10 05:42:20.515228] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:02.597 [2024-12-10 05:42:20.523223] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:02.597 [2024-12-10 05:42:20.531224] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:02.854 [2024-12-10 05:42:20.563319] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.854 passed 00:17:02.854 Test: admin_create_io_sq_verify_pc ...[2024-12-10 05:42:20.640056] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.855 [2024-12-10 05:42:20.656233] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:02.855 [2024-12-10 05:42:20.674244] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.855 passed 00:17:02.855 Test: admin_create_io_qp_max_qps ...[2024-12-10 05:42:20.750775] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.227 [2024-12-10 05:42:21.846226] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:04.485 [2024-12-10 05:42:22.224584] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.485 passed 00:17:04.485 Test: admin_create_io_sq_shared_cq ...[2024-12-10 05:42:22.300548] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.485 [2024-12-10 05:42:22.432232] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:04.743 [2024-12-10 05:42:22.469297] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.743 passed 00:17:04.743 00:17:04.743 Run Summary: Type Total Ran Passed Failed Inactive 00:17:04.743 suites 1 1 n/a 0 0 00:17:04.743 tests 18 18 18 0 0 00:17:04.743 asserts 360 360 360 0 n/a 00:17:04.743 00:17:04.743 Elapsed time = 1.497 seconds 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 105740 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 105740 ']' 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 105740 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105740 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105740' 00:17:04.743 killing process with pid 105740 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 105740 00:17:04.743 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 105740 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:05.001 00:17:05.001 real 0m5.634s 00:17:05.001 user 0m15.716s 00:17:05.001 sys 0m0.510s 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:05.001 ************************************ 00:17:05.001 END TEST nvmf_vfio_user_nvme_compliance 00:17:05.001 ************************************ 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.001 ************************************ 00:17:05.001 START TEST nvmf_vfio_user_fuzz 00:17:05.001 ************************************ 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:05.001 * Looking for test storage... 00:17:05.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:17:05.001 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:05.260 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.261 05:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:05.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.261 --rc genhtml_branch_coverage=1 00:17:05.261 --rc genhtml_function_coverage=1 00:17:05.261 --rc genhtml_legend=1 00:17:05.261 --rc geninfo_all_blocks=1 00:17:05.261 --rc geninfo_unexecuted_blocks=1 00:17:05.261 00:17:05.261 ' 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:05.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.261 --rc genhtml_branch_coverage=1 00:17:05.261 --rc genhtml_function_coverage=1 00:17:05.261 --rc genhtml_legend=1 00:17:05.261 --rc geninfo_all_blocks=1 00:17:05.261 --rc geninfo_unexecuted_blocks=1 00:17:05.261 00:17:05.261 ' 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:05.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.261 --rc genhtml_branch_coverage=1 00:17:05.261 --rc genhtml_function_coverage=1 00:17:05.261 --rc genhtml_legend=1 00:17:05.261 --rc geninfo_all_blocks=1 00:17:05.261 --rc geninfo_unexecuted_blocks=1 00:17:05.261 00:17:05.261 ' 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:05.261 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:05.261 --rc genhtml_branch_coverage=1 00:17:05.261 --rc genhtml_function_coverage=1 00:17:05.261 --rc genhtml_legend=1 00:17:05.261 --rc geninfo_all_blocks=1 00:17:05.261 --rc geninfo_unexecuted_blocks=1 00:17:05.261 00:17:05.261 ' 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.261 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.261 05:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=106801 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 106801' 00:17:05.261 Process pid: 106801 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 106801 00:17:05.261 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 106801 ']' 00:17:05.262 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.262 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.262 05:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.262 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.262 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:05.520 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.520 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:05.520 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:06.452 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.453 malloc0 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:06.453 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:38.518 Fuzzing completed. Shutting down the fuzz application 00:17:38.518 00:17:38.518 Dumping successful admin opcodes: 00:17:38.518 9, 10, 00:17:38.518 Dumping successful io opcodes: 00:17:38.518 0, 00:17:38.518 NS: 0x20000081ef00 I/O qp, Total commands completed: 1017891, total successful commands: 3999, random_seed: 3177512192 00:17:38.518 NS: 0x20000081ef00 admin qp, Total commands completed: 249598, total successful commands: 58, random_seed: 3632179264 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 106801 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 106801 ']' 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 106801 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106801 00:17:38.518 05:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106801' 00:17:38.518 killing process with pid 106801 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 106801 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 106801 00:17:38.518 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:38.518 00:17:38.518 real 0m32.257s 00:17:38.518 user 0m29.835s 00:17:38.518 sys 0m31.316s 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 ************************************ 00:17:38.518 END TEST nvmf_vfio_user_fuzz 00:17:38.518 ************************************ 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 ************************************ 00:17:38.518 START TEST nvmf_auth_target 00:17:38.518 ************************************ 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:38.518 * Looking for test storage... 00:17:38.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.518 05:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.518 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.519 05:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.519 --rc genhtml_branch_coverage=1 00:17:38.519 --rc genhtml_function_coverage=1 00:17:38.519 --rc genhtml_legend=1 00:17:38.519 --rc geninfo_all_blocks=1 00:17:38.519 --rc geninfo_unexecuted_blocks=1 00:17:38.519 00:17:38.519 ' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.519 --rc genhtml_branch_coverage=1 00:17:38.519 --rc genhtml_function_coverage=1 00:17:38.519 --rc genhtml_legend=1 00:17:38.519 --rc geninfo_all_blocks=1 00:17:38.519 --rc geninfo_unexecuted_blocks=1 00:17:38.519 00:17:38.519 ' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.519 --rc genhtml_branch_coverage=1 00:17:38.519 --rc genhtml_function_coverage=1 00:17:38.519 --rc genhtml_legend=1 00:17:38.519 --rc geninfo_all_blocks=1 00:17:38.519 --rc geninfo_unexecuted_blocks=1 00:17:38.519 00:17:38.519 ' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.519 --rc genhtml_branch_coverage=1 00:17:38.519 --rc genhtml_function_coverage=1 00:17:38.519 --rc genhtml_legend=1 00:17:38.519 
--rc geninfo_all_blocks=1 00:17:38.519 --rc geninfo_unexecuted_blocks=1 00:17:38.519 00:17:38.519 ' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.519 
05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:38.519 05:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:38.519 05:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:38.519 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:45.088 05:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:45.088 05:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:45.088 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:45.089 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:45.089 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.089 
05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:45.089 Found net devices under 0000:af:00.0: cvl_0_0 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.089 
05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:45.089 Found net devices under 0000:af:00.1: cvl_0_1 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:45.089 05:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:45.089 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:45.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:17:45.089 00:17:45.089 --- 10.0.0.2 ping statistics --- 00:17:45.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.089 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:45.089 00:17:45.089 --- 10.0.0.1 ping statistics --- 00:17:45.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.089 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=115559 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 115559 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 115559 ']' 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.089 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.089 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.089 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:45.089 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.089 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.089 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=115648 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74b62aa4166430608517f771c346ccb3c3829641743ab6c9 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Jj6 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 74b62aa4166430608517f771c346ccb3c3829641743ab6c9 0 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74b62aa4166430608517f771c346ccb3c3829641743ab6c9 0 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74b62aa4166430608517f771c346ccb3c3829641743ab6c9 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Jj6 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Jj6 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Jj6 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=881209379f47aacd2f5617da0aebf455d2ce0e0e840a50f651cf5548e529d868 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7vF 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 881209379f47aacd2f5617da0aebf455d2ce0e0e840a50f651cf5548e529d868 3 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 881209379f47aacd2f5617da0aebf455d2ce0e0e840a50f651cf5548e529d868 3 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=881209379f47aacd2f5617da0aebf455d2ce0e0e840a50f651cf5548e529d868 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7vF 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7vF 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7vF 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=286b3918c4ca5558b02c8a52791469ee 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kAz 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 286b3918c4ca5558b02c8a52791469ee 1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
286b3918c4ca5558b02c8a52791469ee 1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=286b3918c4ca5558b02c8a52791469ee 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kAz 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kAz 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kAz 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b03458a36d189f5fef127e26c879bdc532861bbafd5f485 00:17:45.350 05:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.r5n 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b03458a36d189f5fef127e26c879bdc532861bbafd5f485 2 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5b03458a36d189f5fef127e26c879bdc532861bbafd5f485 2 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b03458a36d189f5fef127e26c879bdc532861bbafd5f485 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.r5n 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.r5n 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.r5n 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:45.350 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8e8778ef9e42671756f316f46ccdf65a5931f1e6f0ef5485 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.m2b 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8e8778ef9e42671756f316f46ccdf65a5931f1e6f0ef5485 2 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8e8778ef9e42671756f316f46ccdf65a5931f1e6f0ef5485 2 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8e8778ef9e42671756f316f46ccdf65a5931f1e6f0ef5485 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.m2b 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.m2b 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.m2b 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fff36b46dd44be6b0ac83b67f8bae444 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tH1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fff36b46dd44be6b0ac83b67f8bae444 1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fff36b46dd44be6b0ac83b67f8bae444 1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fff36b46dd44be6b0ac83b67f8bae444 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tH1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tH1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.tH1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5ddc61a9bad506bdba9223ad3fa965230e0a7ab8c381ed4a59e6b26b8183c09a 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.plO 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5ddc61a9bad506bdba9223ad3fa965230e0a7ab8c381ed4a59e6b26b8183c09a 3 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 5ddc61a9bad506bdba9223ad3fa965230e0a7ab8c381ed4a59e6b26b8183c09a 3 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5ddc61a9bad506bdba9223ad3fa965230e0a7ab8c381ed4a59e6b26b8183c09a 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.plO 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.plO 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.plO 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 115559 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 115559 ']' 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
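The gen_dhchap_key traces above read random bytes with `xxd`, format them as a DHHC-1 secret via an inline Python snippet, and store the result in a mode-0600 temp file. A self-contained sketch of that flow is below; it assumes the TP 8006 secret representation (base64 of the raw key followed by its little-endian CRC-32, prefixed with the digest id), and the helper name mirrors the trace but is illustrative, not the exact SPDK source.

```shell
# Illustrative re-creation of the gen_dhchap_key flow traced above.
# Assumption: a DHHC-1 secret is base64(key || CRC-32(key), little-endian),
# prefixed with the digest id (0=null, 1=sha256, 2=sha384, 3=sha512).
format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}

key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars -> sha384-sized key, as in the trace
file=$(mktemp -t spdk.key-sha384.XXX)    # e.g. /tmp/spdk.key-sha384.m2b
format_dhchap_key "$key" 2 > "$file"
chmod 0600 "$file"                       # keys must not be world-readable
```

With the fixed 24-byte key from the trace, the output is 51 characters: the `DHHC-1:02:` prefix, 40 base64 characters (28 bytes encoded), and a trailing colon.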
00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.610 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.868 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.868 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:45.868 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 115648 /var/tmp/host.sock 00:17:45.869 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 115648 ']' 00:17:45.869 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:45.869 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.869 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:45.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:45.869 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.869 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jj6 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.126 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Jj6 00:17:46.127 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Jj6 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.7vF ]] 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vF 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vF 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vF 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kAz 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kAz 00:17:46.385 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kAz 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.r5n ]] 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5n 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5n 00:17:46.643 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5n 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.m2b 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.m2b 00:17:46.903 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.m2b 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.tH1 ]] 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tH1 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tH1 00:17:47.161 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tH1 00:17:47.418 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.418 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.plO 00:17:47.418 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.418 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.plO 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.plO 00:17:47.419 05:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:47.419 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.677 05:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.677 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.936 00:17:47.936 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.936 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.936 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.194 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.194 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.195 { 00:17:48.195 "cntlid": 1, 00:17:48.195 "qid": 0, 00:17:48.195 "state": "enabled", 00:17:48.195 "thread": "nvmf_tgt_poll_group_000", 00:17:48.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:48.195 "listen_address": { 00:17:48.195 "trtype": "TCP", 00:17:48.195 "adrfam": "IPv4", 00:17:48.195 "traddr": "10.0.0.2", 00:17:48.195 "trsvcid": "4420" 00:17:48.195 }, 00:17:48.195 "peer_address": { 00:17:48.195 "trtype": "TCP", 00:17:48.195 "adrfam": "IPv4", 00:17:48.195 "traddr": "10.0.0.1", 00:17:48.195 "trsvcid": "39682" 00:17:48.195 }, 00:17:48.195 "auth": { 00:17:48.195 "state": "completed", 00:17:48.195 "digest": "sha256", 00:17:48.195 "dhgroup": "null" 00:17:48.195 } 00:17:48.195 } 00:17:48.195 ]' 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.195 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.195 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.195 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.195 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.195 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.195 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.453 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:17:48.453 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:49.020 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.279 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.538 00:17:49.538 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.538 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.538 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.797 { 00:17:49.797 "cntlid": 3, 00:17:49.797 "qid": 0, 00:17:49.797 "state": "enabled", 00:17:49.797 "thread": "nvmf_tgt_poll_group_000", 00:17:49.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:49.797 "listen_address": { 00:17:49.797 "trtype": "TCP", 00:17:49.797 "adrfam": "IPv4", 00:17:49.797 
"traddr": "10.0.0.2", 00:17:49.797 "trsvcid": "4420" 00:17:49.797 }, 00:17:49.797 "peer_address": { 00:17:49.797 "trtype": "TCP", 00:17:49.797 "adrfam": "IPv4", 00:17:49.797 "traddr": "10.0.0.1", 00:17:49.797 "trsvcid": "39700" 00:17:49.797 }, 00:17:49.797 "auth": { 00:17:49.797 "state": "completed", 00:17:49.797 "digest": "sha256", 00:17:49.797 "dhgroup": "null" 00:17:49.797 } 00:17:49.797 } 00:17:49.797 ]' 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.797 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.058 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:17:50.058 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.664 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.932 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.932 00:17:51.191 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.191 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.191 
05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.191 { 00:17:51.191 "cntlid": 5, 00:17:51.191 "qid": 0, 00:17:51.191 "state": "enabled", 00:17:51.191 "thread": "nvmf_tgt_poll_group_000", 00:17:51.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:51.191 "listen_address": { 00:17:51.191 "trtype": "TCP", 00:17:51.191 "adrfam": "IPv4", 00:17:51.191 "traddr": "10.0.0.2", 00:17:51.191 "trsvcid": "4420" 00:17:51.191 }, 00:17:51.191 "peer_address": { 00:17:51.191 "trtype": "TCP", 00:17:51.191 "adrfam": "IPv4", 00:17:51.191 "traddr": "10.0.0.1", 00:17:51.191 "trsvcid": "39730" 00:17:51.191 }, 00:17:51.191 "auth": { 00:17:51.191 "state": "completed", 00:17:51.191 "digest": "sha256", 00:17:51.191 "dhgroup": "null" 00:17:51.191 } 00:17:51.191 } 00:17:51.191 ]' 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.191 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.449 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:51.449 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.449 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.449 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.449 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.449 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.707 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:17:51.707 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:17:52.272 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.272 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:52.272 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.272 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.272 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.273 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.531 00:17:52.531 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.531 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.531 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.789 
05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.789 { 00:17:52.789 "cntlid": 7, 00:17:52.789 "qid": 0, 00:17:52.789 "state": "enabled", 00:17:52.789 "thread": "nvmf_tgt_poll_group_000", 00:17:52.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:52.789 "listen_address": { 00:17:52.789 "trtype": "TCP", 00:17:52.789 "adrfam": "IPv4", 00:17:52.789 "traddr": "10.0.0.2", 00:17:52.789 "trsvcid": "4420" 00:17:52.789 }, 00:17:52.789 "peer_address": { 00:17:52.789 "trtype": "TCP", 00:17:52.789 "adrfam": "IPv4", 00:17:52.789 "traddr": "10.0.0.1", 00:17:52.789 "trsvcid": "39756" 00:17:52.789 }, 00:17:52.789 "auth": { 00:17:52.789 "state": "completed", 00:17:52.789 "digest": "sha256", 00:17:52.789 "dhgroup": "null" 00:17:52.789 } 00:17:52.789 } 00:17:52.789 ]' 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.789 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.047 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.047 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.047 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.047 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.047 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.304 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:17:53.304 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.870 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.871 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.129 00:17:54.129 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.129 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.129 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.387 { 00:17:54.387 "cntlid": 9, 00:17:54.387 "qid": 0, 00:17:54.387 "state": "enabled", 00:17:54.387 "thread": "nvmf_tgt_poll_group_000", 00:17:54.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:54.387 "listen_address": { 00:17:54.387 "trtype": "TCP", 00:17:54.387 "adrfam": "IPv4", 00:17:54.387 "traddr": "10.0.0.2", 00:17:54.387 "trsvcid": "4420" 00:17:54.387 }, 00:17:54.387 "peer_address": { 00:17:54.387 "trtype": "TCP", 00:17:54.387 "adrfam": "IPv4", 00:17:54.387 "traddr": "10.0.0.1", 00:17:54.387 "trsvcid": "39778" 00:17:54.387 
}, 00:17:54.387 "auth": { 00:17:54.387 "state": "completed", 00:17:54.387 "digest": "sha256", 00:17:54.387 "dhgroup": "ffdhe2048" 00:17:54.387 } 00:17:54.387 } 00:17:54.387 ]' 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.387 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.645 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.645 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.645 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.645 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:17:54.645 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:17:55.210 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.210 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:55.211 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.211 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.211 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.211 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.211 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.211 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.469 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.727 00:17:55.727 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.727 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.727 05:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.985 { 00:17:55.985 "cntlid": 11, 00:17:55.985 "qid": 0, 00:17:55.985 "state": "enabled", 00:17:55.985 "thread": "nvmf_tgt_poll_group_000", 00:17:55.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:55.985 "listen_address": { 00:17:55.985 "trtype": "TCP", 00:17:55.985 "adrfam": "IPv4", 00:17:55.985 "traddr": "10.0.0.2", 00:17:55.985 "trsvcid": "4420" 00:17:55.985 }, 00:17:55.985 "peer_address": { 00:17:55.985 "trtype": "TCP", 00:17:55.985 "adrfam": "IPv4", 00:17:55.985 "traddr": "10.0.0.1", 00:17:55.985 "trsvcid": "39806" 00:17:55.985 }, 00:17:55.985 "auth": { 00:17:55.985 "state": "completed", 00:17:55.985 "digest": "sha256", 00:17:55.985 "dhgroup": "ffdhe2048" 00:17:55.985 } 00:17:55.985 } 00:17:55.985 ]' 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.985 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.243 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:17:56.243 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.810 05:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.810 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.069 05:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.069 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.328 00:17:57.328 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.328 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.328 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.586 { 00:17:57.586 "cntlid": 13, 00:17:57.586 "qid": 0, 00:17:57.586 "state": "enabled", 00:17:57.586 "thread": "nvmf_tgt_poll_group_000", 00:17:57.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:57.586 "listen_address": { 00:17:57.586 "trtype": "TCP", 00:17:57.586 "adrfam": "IPv4", 00:17:57.586 "traddr": "10.0.0.2", 00:17:57.586 "trsvcid": "4420" 00:17:57.586 }, 00:17:57.586 "peer_address": { 00:17:57.586 "trtype": "TCP", 00:17:57.586 "adrfam": "IPv4", 00:17:57.586 "traddr": "10.0.0.1", 00:17:57.586 "trsvcid": "57312" 00:17:57.586 }, 00:17:57.586 "auth": { 00:17:57.586 "state": "completed", 00:17:57.586 "digest": "sha256", 00:17:57.586 "dhgroup": "ffdhe2048" 00:17:57.586 } 00:17:57.586 } 00:17:57.586 ]' 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.586 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.587 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:57.845 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:17:57.845 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.411 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.669 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.927 00:17:58.927 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.927 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.927 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.185 { 00:17:59.185 "cntlid": 15, 00:17:59.185 "qid": 0, 00:17:59.185 "state": "enabled", 00:17:59.185 "thread": "nvmf_tgt_poll_group_000", 00:17:59.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:59.185 "listen_address": { 00:17:59.185 "trtype": "TCP", 00:17:59.185 "adrfam": "IPv4", 00:17:59.185 "traddr": "10.0.0.2", 00:17:59.185 "trsvcid": "4420" 00:17:59.185 }, 00:17:59.185 "peer_address": { 00:17:59.185 "trtype": "TCP", 00:17:59.185 "adrfam": "IPv4", 00:17:59.185 "traddr": "10.0.0.1", 00:17:59.185 "trsvcid": "57340" 00:17:59.185 }, 00:17:59.185 "auth": { 00:17:59.185 
"state": "completed", 00:17:59.185 "digest": "sha256", 00:17:59.185 "dhgroup": "ffdhe2048" 00:17:59.185 } 00:17:59.185 } 00:17:59.185 ]' 00:17:59.185 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.185 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.444 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:17:59.444 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.010 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:00.010 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.268 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.526 00:18:00.526 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.526 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.526 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.784 
05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.784 { 00:18:00.784 "cntlid": 17, 00:18:00.784 "qid": 0, 00:18:00.784 "state": "enabled", 00:18:00.784 "thread": "nvmf_tgt_poll_group_000", 00:18:00.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:00.784 "listen_address": { 00:18:00.784 "trtype": "TCP", 00:18:00.784 "adrfam": "IPv4", 00:18:00.784 "traddr": "10.0.0.2", 00:18:00.784 "trsvcid": "4420" 00:18:00.784 }, 00:18:00.784 "peer_address": { 00:18:00.784 "trtype": "TCP", 00:18:00.784 "adrfam": "IPv4", 00:18:00.784 "traddr": "10.0.0.1", 00:18:00.784 "trsvcid": "57370" 00:18:00.784 }, 00:18:00.784 "auth": { 00:18:00.784 "state": "completed", 00:18:00.784 "digest": "sha256", 00:18:00.784 "dhgroup": "ffdhe3072" 00:18:00.784 } 00:18:00.784 } 00:18:00.784 ]' 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.784 05:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.784 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.042 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:01.042 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.608 05:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.608 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.866 05:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.866 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.124 00:18:02.124 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.124 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.124 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.382 { 00:18:02.382 "cntlid": 19, 00:18:02.382 "qid": 0, 00:18:02.382 "state": "enabled", 00:18:02.382 "thread": "nvmf_tgt_poll_group_000", 00:18:02.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:02.382 "listen_address": { 00:18:02.382 "trtype": "TCP", 00:18:02.382 "adrfam": "IPv4", 00:18:02.382 "traddr": "10.0.0.2", 00:18:02.382 "trsvcid": "4420" 00:18:02.382 }, 00:18:02.382 "peer_address": { 00:18:02.382 "trtype": "TCP", 00:18:02.382 "adrfam": "IPv4", 00:18:02.382 "traddr": "10.0.0.1", 00:18:02.382 "trsvcid": "57398" 00:18:02.382 }, 00:18:02.382 "auth": { 00:18:02.382 "state": "completed", 00:18:02.382 "digest": "sha256", 00:18:02.382 "dhgroup": "ffdhe3072" 00:18:02.382 } 00:18:02.382 } 00:18:02.382 ]' 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.382 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:18:02.640 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:02.640 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:03.205 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.205 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.464 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.722 00:18:03.722 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.722 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.722 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.979 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.979 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.980 { 00:18:03.980 "cntlid": 21, 00:18:03.980 "qid": 0, 00:18:03.980 "state": "enabled", 00:18:03.980 "thread": "nvmf_tgt_poll_group_000", 00:18:03.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:03.980 "listen_address": { 00:18:03.980 "trtype": "TCP", 00:18:03.980 "adrfam": "IPv4", 00:18:03.980 "traddr": "10.0.0.2", 00:18:03.980 "trsvcid": "4420" 00:18:03.980 }, 00:18:03.980 "peer_address": { 00:18:03.980 "trtype": "TCP", 00:18:03.980 "adrfam": "IPv4", 
00:18:03.980 "traddr": "10.0.0.1", 00:18:03.980 "trsvcid": "57420" 00:18:03.980 }, 00:18:03.980 "auth": { 00:18:03.980 "state": "completed", 00:18:03.980 "digest": "sha256", 00:18:03.980 "dhgroup": "ffdhe3072" 00:18:03.980 } 00:18:03.980 } 00:18:03.980 ]' 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.980 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.238 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:04.238 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.929 05:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.929 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.187 00:18:05.187 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.187 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.187 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.445 { 00:18:05.445 "cntlid": 23, 00:18:05.445 "qid": 0, 00:18:05.445 "state": "enabled", 00:18:05.445 "thread": "nvmf_tgt_poll_group_000", 00:18:05.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:05.445 "listen_address": { 00:18:05.445 "trtype": "TCP", 00:18:05.445 "adrfam": "IPv4", 00:18:05.445 "traddr": "10.0.0.2", 00:18:05.445 "trsvcid": "4420" 00:18:05.445 }, 00:18:05.445 "peer_address": { 00:18:05.445 "trtype": "TCP", 00:18:05.445 "adrfam": "IPv4", 00:18:05.445 "traddr": "10.0.0.1", 00:18:05.445 "trsvcid": "57464" 00:18:05.445 }, 00:18:05.445 "auth": { 00:18:05.445 "state": "completed", 00:18:05.445 "digest": "sha256", 00:18:05.445 "dhgroup": "ffdhe3072" 00:18:05.445 } 00:18:05.445 } 00:18:05.445 ]' 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.445 05:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.445 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.703 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.703 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.703 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.703 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:05.703 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:06.268 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.526 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.783 00:18:06.783 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.783 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.783 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.041 05:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.041 { 00:18:07.041 "cntlid": 25, 00:18:07.041 "qid": 0, 00:18:07.041 "state": "enabled", 00:18:07.041 "thread": "nvmf_tgt_poll_group_000", 00:18:07.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:07.041 "listen_address": { 00:18:07.041 "trtype": "TCP", 00:18:07.041 "adrfam": "IPv4", 00:18:07.041 "traddr": "10.0.0.2", 00:18:07.041 "trsvcid": "4420" 00:18:07.041 }, 00:18:07.041 "peer_address": { 00:18:07.041 "trtype": "TCP", 00:18:07.041 "adrfam": "IPv4", 00:18:07.041 "traddr": "10.0.0.1", 00:18:07.041 "trsvcid": "40596" 00:18:07.041 }, 00:18:07.041 "auth": { 00:18:07.041 "state": "completed", 00:18:07.041 "digest": "sha256", 00:18:07.041 "dhgroup": "ffdhe4096" 00:18:07.041 } 00:18:07.041 } 00:18:07.041 ]' 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.041 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.297 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.297 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.297 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.297 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.297 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.554 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:07.554 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.120 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.120 05:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.120 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:08.120 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.120 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.120 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.120 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.120 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.121 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.378 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.637 { 00:18:08.637 "cntlid": 27, 00:18:08.637 "qid": 0, 00:18:08.637 "state": "enabled", 00:18:08.637 "thread": "nvmf_tgt_poll_group_000", 00:18:08.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:08.637 "listen_address": { 00:18:08.637 "trtype": "TCP", 00:18:08.637 "adrfam": "IPv4", 00:18:08.637 "traddr": "10.0.0.2", 00:18:08.637 
"trsvcid": "4420" 00:18:08.637 }, 00:18:08.637 "peer_address": { 00:18:08.637 "trtype": "TCP", 00:18:08.637 "adrfam": "IPv4", 00:18:08.637 "traddr": "10.0.0.1", 00:18:08.637 "trsvcid": "40630" 00:18:08.637 }, 00:18:08.637 "auth": { 00:18:08.637 "state": "completed", 00:18:08.637 "digest": "sha256", 00:18:08.637 "dhgroup": "ffdhe4096" 00:18:08.637 } 00:18:08.637 } 00:18:08.637 ]' 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.637 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.894 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.894 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.894 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.894 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.894 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.152 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:09.152 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.718 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.976 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.976 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.976 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.976 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.234 00:18:10.235 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.235 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.235 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.235 { 00:18:10.235 "cntlid": 29, 00:18:10.235 "qid": 0, 00:18:10.235 "state": "enabled", 00:18:10.235 "thread": "nvmf_tgt_poll_group_000", 00:18:10.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:10.235 "listen_address": { 00:18:10.235 "trtype": "TCP", 00:18:10.235 "adrfam": "IPv4", 00:18:10.235 "traddr": "10.0.0.2", 00:18:10.235 "trsvcid": "4420" 00:18:10.235 }, 00:18:10.235 "peer_address": { 00:18:10.235 "trtype": "TCP", 00:18:10.235 "adrfam": "IPv4", 00:18:10.235 "traddr": "10.0.0.1", 00:18:10.235 "trsvcid": "40664" 00:18:10.235 }, 00:18:10.235 "auth": { 00:18:10.235 "state": "completed", 00:18:10.235 "digest": "sha256", 00:18:10.235 "dhgroup": "ffdhe4096" 00:18:10.235 } 00:18:10.235 } 00:18:10.235 ]' 00:18:10.235 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.492 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.492 05:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.492 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.492 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.492 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.493 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.493 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.750 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:10.751 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.316 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.574 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.832 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.832 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.832 { 00:18:11.832 "cntlid": 31, 00:18:11.832 "qid": 0, 00:18:11.832 "state": "enabled", 00:18:11.832 "thread": "nvmf_tgt_poll_group_000", 00:18:11.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:11.832 "listen_address": { 00:18:11.832 "trtype": "TCP", 00:18:11.832 "adrfam": "IPv4", 00:18:11.832 "traddr": "10.0.0.2", 00:18:11.832 "trsvcid": "4420" 00:18:11.832 }, 00:18:11.832 "peer_address": { 00:18:11.832 "trtype": "TCP", 00:18:11.832 "adrfam": "IPv4", 00:18:11.833 "traddr": "10.0.0.1", 00:18:11.833 "trsvcid": "40694" 00:18:11.833 }, 00:18:11.833 "auth": { 00:18:11.833 "state": "completed", 00:18:11.833 "digest": "sha256", 00:18:11.833 "dhgroup": "ffdhe4096" 00:18:11.833 } 00:18:11.833 } 00:18:11.833 ]' 00:18:11.833 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.091 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.349 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:12.349 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.914 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:12.915 05:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.915 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.172 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.172 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.173 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.173 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.430 00:18:13.430 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.430 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.430 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.689 { 00:18:13.689 "cntlid": 33, 00:18:13.689 "qid": 0, 00:18:13.689 "state": "enabled", 00:18:13.689 "thread": "nvmf_tgt_poll_group_000", 00:18:13.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:13.689 "listen_address": { 00:18:13.689 "trtype": "TCP", 00:18:13.689 "adrfam": "IPv4", 00:18:13.689 "traddr": "10.0.0.2", 00:18:13.689 
"trsvcid": "4420" 00:18:13.689 }, 00:18:13.689 "peer_address": { 00:18:13.689 "trtype": "TCP", 00:18:13.689 "adrfam": "IPv4", 00:18:13.689 "traddr": "10.0.0.1", 00:18:13.689 "trsvcid": "40702" 00:18:13.689 }, 00:18:13.689 "auth": { 00:18:13.689 "state": "completed", 00:18:13.689 "digest": "sha256", 00:18:13.689 "dhgroup": "ffdhe6144" 00:18:13.689 } 00:18:13.689 } 00:18:13.689 ]' 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.689 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.947 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:13.947 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:14.513 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.513 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:14.513 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.513 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.513 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.513 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.514 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.514 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.772 05:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.772 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.773 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.773 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.773 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.773 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.031 00:18:15.031 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.031 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.031 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.289 { 00:18:15.289 "cntlid": 35, 00:18:15.289 "qid": 0, 00:18:15.289 "state": "enabled", 00:18:15.289 "thread": "nvmf_tgt_poll_group_000", 00:18:15.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:15.289 "listen_address": { 00:18:15.289 "trtype": "TCP", 00:18:15.289 "adrfam": "IPv4", 00:18:15.289 "traddr": "10.0.0.2", 00:18:15.289 "trsvcid": "4420" 00:18:15.289 }, 00:18:15.289 "peer_address": { 00:18:15.289 "trtype": "TCP", 00:18:15.289 "adrfam": "IPv4", 00:18:15.289 "traddr": "10.0.0.1", 00:18:15.289 "trsvcid": "40732" 00:18:15.289 }, 00:18:15.289 "auth": { 00:18:15.289 "state": "completed", 00:18:15.289 "digest": "sha256", 00:18:15.289 "dhgroup": "ffdhe6144" 00:18:15.289 } 00:18:15.289 } 00:18:15.289 ]' 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.289 05:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.289 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.547 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:15.547 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.113 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.371 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.629 00:18:16.629 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.629 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.629 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.887 05:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.887 { 00:18:16.887 "cntlid": 37, 00:18:16.887 "qid": 0, 00:18:16.887 "state": "enabled", 00:18:16.887 "thread": "nvmf_tgt_poll_group_000", 00:18:16.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:16.887 "listen_address": { 00:18:16.887 "trtype": "TCP", 00:18:16.887 "adrfam": "IPv4", 00:18:16.887 "traddr": "10.0.0.2", 00:18:16.887 "trsvcid": "4420" 00:18:16.887 }, 00:18:16.887 "peer_address": { 00:18:16.887 "trtype": "TCP", 00:18:16.887 "adrfam": "IPv4", 00:18:16.887 "traddr": "10.0.0.1", 00:18:16.887 "trsvcid": "52298" 00:18:16.887 }, 00:18:16.887 "auth": { 00:18:16.887 "state": "completed", 00:18:16.887 "digest": "sha256", 00:18:16.887 "dhgroup": "ffdhe6144" 00:18:16.887 } 00:18:16.887 } 00:18:16.887 ]' 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.887 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.147 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.147 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.147 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.147 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:17.147 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.712 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.970 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.228 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.486 { 00:18:18.486 "cntlid": 39, 00:18:18.486 "qid": 0, 00:18:18.486 "state": "enabled", 00:18:18.486 "thread": "nvmf_tgt_poll_group_000", 00:18:18.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:18.486 "listen_address": { 00:18:18.486 "trtype": "TCP", 00:18:18.486 "adrfam": 
"IPv4", 00:18:18.486 "traddr": "10.0.0.2", 00:18:18.486 "trsvcid": "4420" 00:18:18.486 }, 00:18:18.486 "peer_address": { 00:18:18.486 "trtype": "TCP", 00:18:18.486 "adrfam": "IPv4", 00:18:18.486 "traddr": "10.0.0.1", 00:18:18.486 "trsvcid": "52330" 00:18:18.486 }, 00:18:18.486 "auth": { 00:18:18.486 "state": "completed", 00:18:18.486 "digest": "sha256", 00:18:18.486 "dhgroup": "ffdhe6144" 00:18:18.486 } 00:18:18.486 } 00:18:18.486 ]' 00:18:18.486 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.743 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.001 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:19.001 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.567 
05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.567 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.825 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.825 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.825 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.825 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.082 00:18:20.082 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.083 05:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.083 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.341 { 00:18:20.341 "cntlid": 41, 00:18:20.341 "qid": 0, 00:18:20.341 "state": "enabled", 00:18:20.341 "thread": "nvmf_tgt_poll_group_000", 00:18:20.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:20.341 "listen_address": { 00:18:20.341 "trtype": "TCP", 00:18:20.341 "adrfam": "IPv4", 00:18:20.341 "traddr": "10.0.0.2", 00:18:20.341 "trsvcid": "4420" 00:18:20.341 }, 00:18:20.341 "peer_address": { 00:18:20.341 "trtype": "TCP", 00:18:20.341 "adrfam": "IPv4", 00:18:20.341 "traddr": "10.0.0.1", 00:18:20.341 "trsvcid": "52368" 00:18:20.341 }, 00:18:20.341 "auth": { 00:18:20.341 "state": "completed", 00:18:20.341 "digest": "sha256", 00:18:20.341 "dhgroup": "ffdhe8192" 00:18:20.341 } 00:18:20.341 } 00:18:20.341 ]' 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:20.341 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.598 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.599 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.599 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.599 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.599 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.856 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:20.856 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.422 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.988 00:18:21.988 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.988 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.988 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.245 05:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.245 { 00:18:22.245 "cntlid": 43, 00:18:22.245 "qid": 0, 00:18:22.245 "state": "enabled", 00:18:22.245 "thread": "nvmf_tgt_poll_group_000", 00:18:22.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:22.245 "listen_address": { 00:18:22.245 "trtype": "TCP", 00:18:22.245 "adrfam": "IPv4", 00:18:22.245 "traddr": "10.0.0.2", 00:18:22.245 "trsvcid": "4420" 00:18:22.245 }, 00:18:22.245 "peer_address": { 00:18:22.245 "trtype": "TCP", 00:18:22.245 "adrfam": "IPv4", 00:18:22.245 "traddr": "10.0.0.1", 00:18:22.245 "trsvcid": "52392" 00:18:22.245 }, 00:18:22.245 "auth": { 00:18:22.245 "state": "completed", 00:18:22.245 "digest": "sha256", 00:18:22.245 "dhgroup": "ffdhe8192" 00:18:22.245 } 00:18:22.245 } 00:18:22.245 ]' 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.245 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.246 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.503 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:22.503 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.068 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.325 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.889 00:18:23.889 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.889 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.889 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.147 { 00:18:24.147 "cntlid": 45, 00:18:24.147 "qid": 0, 00:18:24.147 "state": "enabled", 00:18:24.147 "thread": "nvmf_tgt_poll_group_000", 00:18:24.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:24.147 
"listen_address": { 00:18:24.147 "trtype": "TCP", 00:18:24.147 "adrfam": "IPv4", 00:18:24.147 "traddr": "10.0.0.2", 00:18:24.147 "trsvcid": "4420" 00:18:24.147 }, 00:18:24.147 "peer_address": { 00:18:24.147 "trtype": "TCP", 00:18:24.147 "adrfam": "IPv4", 00:18:24.147 "traddr": "10.0.0.1", 00:18:24.147 "trsvcid": "52420" 00:18:24.147 }, 00:18:24.147 "auth": { 00:18:24.147 "state": "completed", 00:18:24.147 "digest": "sha256", 00:18:24.147 "dhgroup": "ffdhe8192" 00:18:24.147 } 00:18:24.147 } 00:18:24.147 ]' 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.147 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.405 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:24.405 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.970 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.227 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.793 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.793 { 00:18:25.793 "cntlid": 47, 00:18:25.793 "qid": 0, 00:18:25.793 "state": "enabled", 00:18:25.793 "thread": "nvmf_tgt_poll_group_000", 00:18:25.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:25.793 "listen_address": { 00:18:25.793 "trtype": "TCP", 00:18:25.793 "adrfam": "IPv4", 00:18:25.793 "traddr": "10.0.0.2", 00:18:25.793 "trsvcid": "4420" 00:18:25.793 }, 00:18:25.793 "peer_address": { 00:18:25.793 "trtype": "TCP", 00:18:25.793 "adrfam": "IPv4", 00:18:25.793 "traddr": "10.0.0.1", 00:18:25.793 "trsvcid": "52450" 00:18:25.793 }, 00:18:25.793 "auth": { 00:18:25.793 "state": "completed", 00:18:25.793 "digest": "sha256", 00:18:25.793 "dhgroup": "ffdhe8192" 00:18:25.793 } 00:18:25.793 } 00:18:25.793 ]' 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.793 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.793 05:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.050 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.050 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.050 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.050 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.050 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.308 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:26.308 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.873 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.874 
05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.874 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.131 00:18:27.131 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.131 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.131 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.387 { 00:18:27.387 "cntlid": 49, 00:18:27.387 "qid": 0, 00:18:27.387 "state": "enabled", 00:18:27.387 "thread": "nvmf_tgt_poll_group_000", 00:18:27.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:27.387 "listen_address": { 00:18:27.387 "trtype": "TCP", 00:18:27.387 "adrfam": "IPv4", 00:18:27.387 "traddr": "10.0.0.2", 00:18:27.387 "trsvcid": "4420" 00:18:27.387 }, 00:18:27.387 "peer_address": { 00:18:27.387 "trtype": "TCP", 00:18:27.387 "adrfam": "IPv4", 00:18:27.387 "traddr": "10.0.0.1", 00:18:27.387 "trsvcid": "47658" 00:18:27.387 }, 00:18:27.387 "auth": { 00:18:27.387 "state": "completed", 00:18:27.387 "digest": "sha384", 00:18:27.387 "dhgroup": "null" 00:18:27.387 } 00:18:27.387 } 00:18:27.387 ]' 00:18:27.387 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:27.644 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.901 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:27.901 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:28.465 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.465 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:28.465 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.465 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.465 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.465 05:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.466 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.723 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.723 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.981 { 00:18:28.981 "cntlid": 51, 00:18:28.981 "qid": 0, 00:18:28.981 "state": "enabled", 00:18:28.981 "thread": "nvmf_tgt_poll_group_000", 00:18:28.981 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:28.981 "listen_address": { 00:18:28.981 "trtype": "TCP", 00:18:28.981 "adrfam": "IPv4", 00:18:28.981 "traddr": "10.0.0.2", 00:18:28.981 "trsvcid": "4420" 00:18:28.981 }, 00:18:28.981 "peer_address": { 00:18:28.981 "trtype": "TCP", 00:18:28.981 "adrfam": "IPv4", 00:18:28.981 "traddr": "10.0.0.1", 00:18:28.981 "trsvcid": "47700" 00:18:28.981 }, 00:18:28.981 "auth": { 00:18:28.981 "state": "completed", 00:18:28.981 "digest": "sha384", 00:18:28.981 "dhgroup": "null" 00:18:28.981 } 00:18:28.981 } 00:18:28.981 ]' 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.981 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.238 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.238 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:29.238 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.238 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.238 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.238 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.496 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:29.496 05:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:30.060 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.318 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.318 00:18:30.575 05:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.575 { 00:18:30.575 "cntlid": 53, 00:18:30.575 "qid": 0, 00:18:30.575 "state": "enabled", 00:18:30.575 "thread": "nvmf_tgt_poll_group_000", 00:18:30.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:30.575 "listen_address": { 00:18:30.575 "trtype": "TCP", 00:18:30.575 "adrfam": "IPv4", 00:18:30.575 "traddr": "10.0.0.2", 00:18:30.575 "trsvcid": "4420" 00:18:30.575 }, 00:18:30.575 "peer_address": { 00:18:30.575 "trtype": "TCP", 00:18:30.575 "adrfam": "IPv4", 00:18:30.575 "traddr": "10.0.0.1", 00:18:30.575 "trsvcid": "47720" 00:18:30.575 }, 00:18:30.575 "auth": { 00:18:30.575 "state": "completed", 00:18:30.575 "digest": "sha384", 00:18:30.575 "dhgroup": "null" 00:18:30.575 } 00:18:30.575 } 00:18:30.575 ]' 00:18:30.575 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.833 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.090 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:31.090 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:31.655 
05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.655 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.912 00:18:31.912 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.912 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.912 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.169 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.169 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.169 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.169 05:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.169 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.170 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.170 { 00:18:32.170 "cntlid": 55, 00:18:32.170 "qid": 0, 00:18:32.170 "state": "enabled", 00:18:32.170 "thread": "nvmf_tgt_poll_group_000", 00:18:32.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:32.170 "listen_address": { 00:18:32.170 "trtype": "TCP", 00:18:32.170 "adrfam": "IPv4", 00:18:32.170 "traddr": "10.0.0.2", 00:18:32.170 "trsvcid": "4420" 00:18:32.170 }, 00:18:32.170 "peer_address": { 00:18:32.170 "trtype": "TCP", 00:18:32.170 "adrfam": "IPv4", 00:18:32.170 "traddr": "10.0.0.1", 00:18:32.170 "trsvcid": "47738" 00:18:32.170 }, 00:18:32.170 "auth": { 00:18:32.170 "state": "completed", 00:18:32.170 "digest": "sha384", 00:18:32.170 "dhgroup": "null" 00:18:32.170 } 00:18:32.170 } 00:18:32.170 ]' 00:18:32.170 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.170 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.170 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.427 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:32.427 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.427 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.427 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.427 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.684 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:32.684 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.249 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.249 05:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.249 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.507 00:18:33.507 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.507 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.507 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.764 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.764 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.764 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.764 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.764 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.764 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.764 { 00:18:33.764 "cntlid": 57, 00:18:33.764 "qid": 0, 00:18:33.764 "state": "enabled", 00:18:33.764 "thread": "nvmf_tgt_poll_group_000", 00:18:33.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:33.765 "listen_address": { 00:18:33.765 "trtype": "TCP", 00:18:33.765 "adrfam": "IPv4", 00:18:33.765 "traddr": "10.0.0.2", 00:18:33.765 
"trsvcid": "4420" 00:18:33.765 }, 00:18:33.765 "peer_address": { 00:18:33.765 "trtype": "TCP", 00:18:33.765 "adrfam": "IPv4", 00:18:33.765 "traddr": "10.0.0.1", 00:18:33.765 "trsvcid": "47774" 00:18:33.765 }, 00:18:33.765 "auth": { 00:18:33.765 "state": "completed", 00:18:33.765 "digest": "sha384", 00:18:33.765 "dhgroup": "ffdhe2048" 00:18:33.765 } 00:18:33.765 } 00:18:33.765 ]' 00:18:33.765 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.765 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.765 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.765 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.765 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.022 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.022 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.022 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.022 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:34.022 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.587 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.844 05:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.844 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.102 00:18:35.102 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.102 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.102 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.359 { 00:18:35.359 "cntlid": 59, 00:18:35.359 "qid": 0, 00:18:35.359 "state": "enabled", 00:18:35.359 "thread": "nvmf_tgt_poll_group_000", 00:18:35.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:35.359 "listen_address": { 00:18:35.359 "trtype": "TCP", 00:18:35.359 "adrfam": "IPv4", 00:18:35.359 "traddr": "10.0.0.2", 00:18:35.359 "trsvcid": "4420" 00:18:35.359 }, 00:18:35.359 "peer_address": { 00:18:35.359 "trtype": "TCP", 00:18:35.359 "adrfam": "IPv4", 00:18:35.359 "traddr": "10.0.0.1", 00:18:35.359 "trsvcid": "47798" 00:18:35.359 }, 00:18:35.359 "auth": { 00:18:35.359 "state": "completed", 00:18:35.359 "digest": "sha384", 00:18:35.359 "dhgroup": "ffdhe2048" 00:18:35.359 } 00:18:35.359 } 00:18:35.359 ]' 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.359 05:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.359 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.617 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:35.617 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.181 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.182 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.439 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.697 00:18:36.697 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.697 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.697 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.955 05:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.955 { 00:18:36.955 "cntlid": 61, 00:18:36.955 "qid": 0, 00:18:36.955 "state": "enabled", 00:18:36.955 "thread": "nvmf_tgt_poll_group_000", 00:18:36.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:36.955 "listen_address": { 00:18:36.955 "trtype": "TCP", 00:18:36.955 "adrfam": "IPv4", 00:18:36.955 "traddr": "10.0.0.2", 00:18:36.955 "trsvcid": "4420" 00:18:36.955 }, 00:18:36.955 "peer_address": { 00:18:36.955 "trtype": "TCP", 00:18:36.955 "adrfam": "IPv4", 00:18:36.955 "traddr": "10.0.0.1", 00:18:36.955 "trsvcid": "45246" 00:18:36.955 }, 00:18:36.955 "auth": { 00:18:36.955 "state": "completed", 00:18:36.955 "digest": "sha384", 00:18:36.955 "dhgroup": "ffdhe2048" 00:18:36.955 } 00:18:36.955 } 00:18:36.955 ]' 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.955 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.212 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:37.212 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.777 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.035 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.292 00:18:38.292 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.292 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.292 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.550 { 00:18:38.550 "cntlid": 63, 00:18:38.550 "qid": 0, 00:18:38.550 "state": "enabled", 00:18:38.550 "thread": "nvmf_tgt_poll_group_000", 00:18:38.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:38.550 "listen_address": { 00:18:38.550 "trtype": "TCP", 00:18:38.550 "adrfam": 
"IPv4", 00:18:38.550 "traddr": "10.0.0.2", 00:18:38.550 "trsvcid": "4420" 00:18:38.550 }, 00:18:38.550 "peer_address": { 00:18:38.550 "trtype": "TCP", 00:18:38.550 "adrfam": "IPv4", 00:18:38.550 "traddr": "10.0.0.1", 00:18:38.550 "trsvcid": "45276" 00:18:38.550 }, 00:18:38.550 "auth": { 00:18:38.550 "state": "completed", 00:18:38.550 "digest": "sha384", 00:18:38.550 "dhgroup": "ffdhe2048" 00:18:38.550 } 00:18:38.550 } 00:18:38.550 ]' 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.550 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.807 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:38.807 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.372 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.630 
05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.630 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.888 00:18:39.888 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.888 05:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.888 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.145 { 00:18:40.145 "cntlid": 65, 00:18:40.145 "qid": 0, 00:18:40.145 "state": "enabled", 00:18:40.145 "thread": "nvmf_tgt_poll_group_000", 00:18:40.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:40.145 "listen_address": { 00:18:40.145 "trtype": "TCP", 00:18:40.145 "adrfam": "IPv4", 00:18:40.145 "traddr": "10.0.0.2", 00:18:40.145 "trsvcid": "4420" 00:18:40.145 }, 00:18:40.145 "peer_address": { 00:18:40.145 "trtype": "TCP", 00:18:40.145 "adrfam": "IPv4", 00:18:40.145 "traddr": "10.0.0.1", 00:18:40.145 "trsvcid": "45304" 00:18:40.145 }, 00:18:40.145 "auth": { 00:18:40.145 "state": "completed", 00:18:40.145 "digest": "sha384", 00:18:40.145 "dhgroup": "ffdhe3072" 00:18:40.145 } 00:18:40.145 } 00:18:40.145 ]' 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:40.145 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.145 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.145 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.145 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.145 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.145 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.403 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:40.403 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.968 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.226 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.483 00:18:41.483 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.483 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.483 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.741 05:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.741 { 00:18:41.741 "cntlid": 67, 00:18:41.741 "qid": 0, 00:18:41.741 "state": "enabled", 00:18:41.741 "thread": "nvmf_tgt_poll_group_000", 00:18:41.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:41.741 "listen_address": { 00:18:41.741 "trtype": "TCP", 00:18:41.741 "adrfam": "IPv4", 00:18:41.741 "traddr": "10.0.0.2", 00:18:41.741 "trsvcid": "4420" 00:18:41.741 }, 00:18:41.741 "peer_address": { 00:18:41.741 "trtype": "TCP", 00:18:41.741 "adrfam": "IPv4", 00:18:41.741 "traddr": "10.0.0.1", 00:18:41.741 "trsvcid": "45324" 00:18:41.741 }, 00:18:41.741 "auth": { 00:18:41.741 "state": "completed", 00:18:41.741 "digest": "sha384", 00:18:41.741 "dhgroup": "ffdhe3072" 00:18:41.741 } 00:18:41.741 } 00:18:41.741 ]' 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.741 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.998 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:41.998 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.563 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.820 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.078 00:18:43.078 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.078 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.078 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.335 { 00:18:43.335 "cntlid": 69, 00:18:43.335 "qid": 0, 00:18:43.335 "state": "enabled", 00:18:43.335 "thread": "nvmf_tgt_poll_group_000", 00:18:43.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:43.335 
"listen_address": { 00:18:43.335 "trtype": "TCP", 00:18:43.335 "adrfam": "IPv4", 00:18:43.335 "traddr": "10.0.0.2", 00:18:43.335 "trsvcid": "4420" 00:18:43.335 }, 00:18:43.335 "peer_address": { 00:18:43.335 "trtype": "TCP", 00:18:43.335 "adrfam": "IPv4", 00:18:43.335 "traddr": "10.0.0.1", 00:18:43.335 "trsvcid": "45364" 00:18:43.335 }, 00:18:43.335 "auth": { 00:18:43.335 "state": "completed", 00:18:43.335 "digest": "sha384", 00:18:43.335 "dhgroup": "ffdhe3072" 00:18:43.335 } 00:18:43.335 } 00:18:43.335 ]' 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.335 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.593 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:43.593 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.156 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.415 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.672 00:18:44.672 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.672 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:44.672 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.930 { 00:18:44.930 "cntlid": 71, 00:18:44.930 "qid": 0, 00:18:44.930 "state": "enabled", 00:18:44.930 "thread": "nvmf_tgt_poll_group_000", 00:18:44.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:44.930 "listen_address": { 00:18:44.930 "trtype": "TCP", 00:18:44.930 "adrfam": "IPv4", 00:18:44.930 "traddr": "10.0.0.2", 00:18:44.930 "trsvcid": "4420" 00:18:44.930 }, 00:18:44.930 "peer_address": { 00:18:44.930 "trtype": "TCP", 00:18:44.930 "adrfam": "IPv4", 00:18:44.930 "traddr": "10.0.0.1", 00:18:44.930 "trsvcid": "45384" 00:18:44.930 }, 00:18:44.930 "auth": { 00:18:44.930 "state": "completed", 00:18:44.930 "digest": "sha384", 00:18:44.930 "dhgroup": "ffdhe3072" 00:18:44.930 } 00:18:44.930 } 00:18:44.930 ]' 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.930 05:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.930 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.187 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:45.187 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
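Each iteration above ends by fetching the subsystem's qpairs and checking the `auth` block with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state` must match the digest/DH group the host was configured with, and the state must be `completed`). A minimal Python sketch of that verification step, using a hypothetical payload shaped like the `nvmf_subsystem_get_qpairs` output printed in this log (field values copied from the log; this is an illustration of the check, not part of the test suite):

```python
import json

# Sample payload shaped like the nvmf_subsystem_get_qpairs output in the
# log above, trimmed to the fields the jq checks actually read.
qpairs_json = '''
[
  {
    "cntlid": 73,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe4096"
    }
  }
]
'''

def check_auth(payload: str, digest: str, dhgroup: str) -> bool:
    """Mirror the test's jq checks: authentication on the first qpair
    must have completed with the expected digest and DH group."""
    auth = json.loads(payload)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

print(check_auth(qpairs_json, "sha384", "ffdhe4096"))  # True
```

If any of the three fields disagrees with what `bdev_nvme_set_options --dhchap-digests … --dhchap-dhgroups …` configured, the check fails and the test run aborts at the corresponding `[[ … ]]` comparison.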
00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.752 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.009 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.266 00:18:46.266 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.267 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.267 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.524 05:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.524 { 00:18:46.524 "cntlid": 73, 00:18:46.524 "qid": 0, 00:18:46.524 "state": "enabled", 00:18:46.524 "thread": "nvmf_tgt_poll_group_000", 00:18:46.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:46.524 "listen_address": { 00:18:46.524 "trtype": "TCP", 00:18:46.524 "adrfam": "IPv4", 00:18:46.524 "traddr": "10.0.0.2", 00:18:46.524 "trsvcid": "4420" 00:18:46.524 }, 00:18:46.524 "peer_address": { 00:18:46.524 "trtype": "TCP", 00:18:46.524 "adrfam": "IPv4", 00:18:46.524 "traddr": "10.0.0.1", 00:18:46.524 "trsvcid": "45414" 00:18:46.524 }, 00:18:46.524 "auth": { 00:18:46.524 "state": "completed", 00:18:46.524 "digest": "sha384", 00:18:46.524 "dhgroup": "ffdhe4096" 00:18:46.524 } 00:18:46.524 } 00:18:46.524 ]' 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.524 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.524 05:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.782 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:46.782 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.347 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.604 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:47.604 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.604 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.604 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.605 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.862 00:18:47.862 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.862 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.862 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.119 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.119 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.119 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.120 { 00:18:48.120 "cntlid": 75, 00:18:48.120 "qid": 0, 00:18:48.120 "state": "enabled", 00:18:48.120 "thread": "nvmf_tgt_poll_group_000", 00:18:48.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:48.120 
"listen_address": { 00:18:48.120 "trtype": "TCP", 00:18:48.120 "adrfam": "IPv4", 00:18:48.120 "traddr": "10.0.0.2", 00:18:48.120 "trsvcid": "4420" 00:18:48.120 }, 00:18:48.120 "peer_address": { 00:18:48.120 "trtype": "TCP", 00:18:48.120 "adrfam": "IPv4", 00:18:48.120 "traddr": "10.0.0.1", 00:18:48.120 "trsvcid": "33748" 00:18:48.120 }, 00:18:48.120 "auth": { 00:18:48.120 "state": "completed", 00:18:48.120 "digest": "sha384", 00:18:48.120 "dhgroup": "ffdhe4096" 00:18:48.120 } 00:18:48.120 } 00:18:48.120 ]' 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.120 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.120 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.120 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.120 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.378 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:48.378 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:48.942 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.942 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:48.942 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.943 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.943 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.943 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.943 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.943 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.200 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.457 00:18:49.457 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:49.457 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.457 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.715 { 00:18:49.715 "cntlid": 77, 00:18:49.715 "qid": 0, 00:18:49.715 "state": "enabled", 00:18:49.715 "thread": "nvmf_tgt_poll_group_000", 00:18:49.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:49.715 "listen_address": { 00:18:49.715 "trtype": "TCP", 00:18:49.715 "adrfam": "IPv4", 00:18:49.715 "traddr": "10.0.0.2", 00:18:49.715 "trsvcid": "4420" 00:18:49.715 }, 00:18:49.715 "peer_address": { 00:18:49.715 "trtype": "TCP", 00:18:49.715 "adrfam": "IPv4", 00:18:49.715 "traddr": "10.0.0.1", 00:18:49.715 "trsvcid": "33776" 00:18:49.715 }, 00:18:49.715 "auth": { 00:18:49.715 "state": "completed", 00:18:49.715 "digest": "sha384", 00:18:49.715 "dhgroup": "ffdhe4096" 00:18:49.715 } 00:18:49.715 } 00:18:49.715 ]' 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.715 05:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.715 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.973 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:49.973 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.538 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.794 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:50.795 05:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.795 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.051 00:18:51.051 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.051 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.051 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.309 05:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.309 { 00:18:51.309 "cntlid": 79, 00:18:51.309 "qid": 0, 00:18:51.309 "state": "enabled", 00:18:51.309 "thread": "nvmf_tgt_poll_group_000", 00:18:51.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:51.309 "listen_address": { 00:18:51.309 "trtype": "TCP", 00:18:51.309 "adrfam": "IPv4", 00:18:51.309 "traddr": "10.0.0.2", 00:18:51.309 "trsvcid": "4420" 00:18:51.309 }, 00:18:51.309 "peer_address": { 00:18:51.309 "trtype": "TCP", 00:18:51.309 "adrfam": "IPv4", 00:18:51.309 "traddr": "10.0.0.1", 00:18:51.309 "trsvcid": "33822" 00:18:51.309 }, 00:18:51.309 "auth": { 00:18:51.309 "state": "completed", 00:18:51.309 "digest": "sha384", 00:18:51.309 "dhgroup": "ffdhe4096" 00:18:51.309 } 00:18:51.309 } 00:18:51.309 ]' 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.309 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.566 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.566 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.566 05:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.566 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:51.566 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:52.131 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.389 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.954 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.954 { 00:18:52.954 "cntlid": 81, 00:18:52.954 "qid": 0, 00:18:52.954 "state": "enabled", 00:18:52.954 "thread": "nvmf_tgt_poll_group_000", 00:18:52.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:52.954 "listen_address": { 
00:18:52.954 "trtype": "TCP", 00:18:52.954 "adrfam": "IPv4", 00:18:52.954 "traddr": "10.0.0.2", 00:18:52.954 "trsvcid": "4420" 00:18:52.954 }, 00:18:52.954 "peer_address": { 00:18:52.954 "trtype": "TCP", 00:18:52.954 "adrfam": "IPv4", 00:18:52.954 "traddr": "10.0.0.1", 00:18:52.954 "trsvcid": "33844" 00:18:52.954 }, 00:18:52.954 "auth": { 00:18:52.954 "state": "completed", 00:18:52.954 "digest": "sha384", 00:18:52.954 "dhgroup": "ffdhe6144" 00:18:52.954 } 00:18:52.954 } 00:18:52.954 ]' 00:18:52.954 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.212 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.212 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.212 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.470 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:53.470 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.036 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.601 00:18:54.601 05:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.601 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.601 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.601 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.601 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.601 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.602 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.602 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.602 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.602 { 00:18:54.602 "cntlid": 83, 00:18:54.602 "qid": 0, 00:18:54.602 "state": "enabled", 00:18:54.602 "thread": "nvmf_tgt_poll_group_000", 00:18:54.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:54.602 "listen_address": { 00:18:54.602 "trtype": "TCP", 00:18:54.602 "adrfam": "IPv4", 00:18:54.602 "traddr": "10.0.0.2", 00:18:54.602 "trsvcid": "4420" 00:18:54.602 }, 00:18:54.602 "peer_address": { 00:18:54.602 "trtype": "TCP", 00:18:54.602 "adrfam": "IPv4", 00:18:54.602 "traddr": "10.0.0.1", 00:18:54.602 "trsvcid": "33864" 00:18:54.602 }, 00:18:54.602 "auth": { 00:18:54.602 "state": "completed", 00:18:54.602 "digest": "sha384", 00:18:54.602 "dhgroup": "ffdhe6144" 00:18:54.602 } 00:18:54.602 } 00:18:54.602 ]' 00:18:54.602 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.859 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.117 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:55.117 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.682 05:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.682 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.940 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.197 00:18:56.197 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.197 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.197 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.456 { 00:18:56.456 "cntlid": 85, 00:18:56.456 "qid": 0, 00:18:56.456 "state": "enabled", 00:18:56.456 "thread": "nvmf_tgt_poll_group_000", 00:18:56.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:56.456 "listen_address": { 00:18:56.456 "trtype": "TCP", 00:18:56.456 "adrfam": "IPv4", 00:18:56.456 "traddr": "10.0.0.2", 00:18:56.456 "trsvcid": "4420" 00:18:56.456 }, 00:18:56.456 "peer_address": { 00:18:56.456 "trtype": "TCP", 00:18:56.456 "adrfam": "IPv4", 00:18:56.456 "traddr": "10.0.0.1", 00:18:56.456 "trsvcid": "33902" 00:18:56.456 }, 00:18:56.456 "auth": { 00:18:56.456 "state": "completed", 00:18:56.456 "digest": "sha384", 00:18:56.456 "dhgroup": "ffdhe6144" 00:18:56.456 } 00:18:56.456 } 00:18:56.456 ]' 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.456 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.714 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:56.714 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.280 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.538 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.795 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.053 { 00:18:58.053 "cntlid": 87, 00:18:58.053 "qid": 0, 00:18:58.053 "state": "enabled", 00:18:58.053 "thread": "nvmf_tgt_poll_group_000", 00:18:58.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:58.053 "listen_address": { 00:18:58.053 "trtype": 
"TCP", 00:18:58.053 "adrfam": "IPv4", 00:18:58.053 "traddr": "10.0.0.2", 00:18:58.053 "trsvcid": "4420" 00:18:58.053 }, 00:18:58.053 "peer_address": { 00:18:58.053 "trtype": "TCP", 00:18:58.053 "adrfam": "IPv4", 00:18:58.053 "traddr": "10.0.0.1", 00:18:58.053 "trsvcid": "33334" 00:18:58.053 }, 00:18:58.053 "auth": { 00:18:58.053 "state": "completed", 00:18:58.053 "digest": "sha384", 00:18:58.053 "dhgroup": "ffdhe6144" 00:18:58.053 } 00:18:58.053 } 00:18:58.053 ]' 00:18:58.053 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.053 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.311 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.311 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.311 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.311 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.311 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.311 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.569 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:58.569 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.134 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.391 05:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.391 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.392 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.392 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.392 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.649 00:18:59.649 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.649 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.649 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.907 { 00:18:59.907 "cntlid": 89, 00:18:59.907 "qid": 0, 00:18:59.907 "state": "enabled", 00:18:59.907 "thread": "nvmf_tgt_poll_group_000", 00:18:59.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:59.907 "listen_address": { 00:18:59.907 "trtype": "TCP", 00:18:59.907 "adrfam": "IPv4", 00:18:59.907 "traddr": "10.0.0.2", 00:18:59.907 "trsvcid": "4420" 00:18:59.907 }, 00:18:59.907 "peer_address": { 00:18:59.907 "trtype": "TCP", 00:18:59.907 "adrfam": "IPv4", 00:18:59.907 "traddr": "10.0.0.1", 00:18:59.907 "trsvcid": "33350" 00:18:59.907 }, 00:18:59.907 "auth": { 00:18:59.907 "state": "completed", 00:18:59.907 "digest": "sha384", 00:18:59.907 "dhgroup": "ffdhe8192" 00:18:59.907 } 00:18:59.907 } 00:18:59.907 ]' 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.907 05:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.907 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.165 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.165 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.165 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.165 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.165 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.423 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:00.423 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:00.988 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.989 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.555 00:19:01.555 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.555 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.555 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.813 { 00:19:01.813 "cntlid": 91, 00:19:01.813 "qid": 0, 00:19:01.813 "state": "enabled", 00:19:01.813 "thread": "nvmf_tgt_poll_group_000", 00:19:01.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:01.813 "listen_address": { 00:19:01.813 "trtype": "TCP", 00:19:01.813 "adrfam": "IPv4", 00:19:01.813 "traddr": "10.0.0.2", 00:19:01.813 "trsvcid": "4420" 00:19:01.813 }, 00:19:01.813 "peer_address": { 00:19:01.813 "trtype": "TCP", 00:19:01.813 "adrfam": "IPv4", 00:19:01.813 "traddr": "10.0.0.1", 00:19:01.813 "trsvcid": "33370" 00:19:01.813 }, 00:19:01.813 "auth": { 00:19:01.813 "state": "completed", 00:19:01.813 "digest": "sha384", 00:19:01.813 "dhgroup": "ffdhe8192" 00:19:01.813 } 00:19:01.813 } 00:19:01.813 ]' 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.813 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.071 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:02.071 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.636 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.894 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.895 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.895 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.895 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.895 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.895 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.895 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.460 00:19:03.460 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.460 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.460 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.719 { 00:19:03.719 "cntlid": 93, 00:19:03.719 "qid": 0, 00:19:03.719 "state": "enabled", 00:19:03.719 "thread": "nvmf_tgt_poll_group_000", 00:19:03.719 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:03.719 "listen_address": { 00:19:03.719 "trtype": "TCP", 00:19:03.719 "adrfam": "IPv4", 00:19:03.719 "traddr": "10.0.0.2", 00:19:03.719 "trsvcid": "4420" 00:19:03.719 }, 00:19:03.719 "peer_address": { 00:19:03.719 "trtype": "TCP", 00:19:03.719 "adrfam": "IPv4", 00:19:03.719 "traddr": "10.0.0.1", 00:19:03.719 "trsvcid": "33406" 00:19:03.719 }, 00:19:03.719 "auth": { 00:19:03.719 "state": "completed", 00:19:03.719 "digest": "sha384", 00:19:03.719 "dhgroup": "ffdhe8192" 00:19:03.719 } 00:19:03.719 } 00:19:03.719 ]' 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.719 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.979 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:03.979 05:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.581 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.854 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.145 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.432 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.432 { 00:19:05.432 "cntlid": 95, 00:19:05.432 "qid": 0, 00:19:05.432 "state": "enabled", 00:19:05.432 "thread": "nvmf_tgt_poll_group_000", 00:19:05.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:05.432 "listen_address": { 00:19:05.432 "trtype": "TCP", 00:19:05.432 "adrfam": "IPv4", 00:19:05.432 "traddr": "10.0.0.2", 00:19:05.432 "trsvcid": "4420" 00:19:05.433 }, 00:19:05.433 "peer_address": { 00:19:05.433 "trtype": "TCP", 00:19:05.433 "adrfam": "IPv4", 00:19:05.433 "traddr": "10.0.0.1", 00:19:05.433 "trsvcid": "33434" 00:19:05.433 }, 00:19:05.433 "auth": { 00:19:05.433 "state": "completed", 00:19:05.433 "digest": "sha384", 00:19:05.433 "dhgroup": "ffdhe8192" 00:19:05.433 } 00:19:05.433 } 00:19:05.433 ]' 00:19:05.433 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.433 05:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.433 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.690 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.690 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.690 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.690 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.690 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.948 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:05.948 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.515 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.773 00:19:06.773 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.773 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.773 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.031 05:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:07.031 {
00:19:07.031 "cntlid": 97,
00:19:07.031 "qid": 0,
00:19:07.031 "state": "enabled",
00:19:07.031 "thread": "nvmf_tgt_poll_group_000",
00:19:07.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:19:07.031 "listen_address": {
00:19:07.031 "trtype": "TCP",
00:19:07.031 "adrfam": "IPv4",
00:19:07.031 "traddr": "10.0.0.2",
00:19:07.031 "trsvcid": "4420"
00:19:07.031 },
00:19:07.031 "peer_address": {
00:19:07.031 "trtype": "TCP",
00:19:07.031 "adrfam": "IPv4",
00:19:07.031 "traddr": "10.0.0.1",
00:19:07.031 "trsvcid": "38378"
00:19:07.031 },
00:19:07.031 "auth": {
00:19:07.031 "state": "completed",
00:19:07.031 "digest": "sha512",
00:19:07.031 "dhgroup": "null"
00:19:07.031 }
00:19:07.031 }
00:19:07.031 ]'
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:07.031 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:07.290 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:07.290 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:07.290 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:07.290 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:07.290 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:07.290 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=:
00:19:07.290 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=:
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:07.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:07.856 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.114 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.114 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.114 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.114 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.114 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.372
00:19:08.372 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:08.372 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:08.372 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.630 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.630 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.630 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.630 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.630 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.630 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:08.630 {
00:19:08.630 "cntlid": 99,
00:19:08.630 "qid": 0,
00:19:08.630 "state": "enabled",
00:19:08.630 "thread": "nvmf_tgt_poll_group_000",
00:19:08.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:19:08.630 "listen_address": {
00:19:08.630 "trtype": "TCP",
00:19:08.630 "adrfam": "IPv4",
00:19:08.630 "traddr": "10.0.0.2",
00:19:08.630 "trsvcid": "4420"
00:19:08.630 },
00:19:08.630 "peer_address": {
00:19:08.630 "trtype": "TCP",
00:19:08.630 "adrfam": "IPv4",
00:19:08.630 "traddr": "10.0.0.1",
00:19:08.630 "trsvcid": "38418"
00:19:08.630 },
00:19:08.630 "auth": {
00:19:08.630 "state": "completed",
00:19:08.630 "digest": "sha512",
00:19:08.630 "dhgroup": "null"
00:19:08.630 }
00:19:08.630 }
00:19:08.631 ]'
00:19:08.631 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:08.631 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:08.631 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:08.631 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:08.631 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:08.888 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:08.888 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:08.888 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:08.888 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==:
00:19:08.888 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==:
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:09.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:09.454 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.712 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.970
00:19:09.970 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:09.970 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:09.971 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:10.229 {
00:19:10.229 "cntlid": 101,
00:19:10.229 "qid": 0,
00:19:10.229 "state": "enabled",
00:19:10.229 "thread": "nvmf_tgt_poll_group_000",
00:19:10.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:19:10.229 "listen_address": {
00:19:10.229 "trtype": "TCP",
00:19:10.229 "adrfam": "IPv4",
00:19:10.229 "traddr": "10.0.0.2",
00:19:10.229 "trsvcid": "4420"
00:19:10.229 },
00:19:10.229 "peer_address": {
00:19:10.229 "trtype": "TCP",
00:19:10.229 "adrfam": "IPv4",
00:19:10.229 "traddr": "10.0.0.1",
00:19:10.229 "trsvcid": "38436"
00:19:10.229 },
00:19:10.229 "auth": {
00:19:10.229 "state": "completed",
00:19:10.229 "digest": "sha512",
00:19:10.229 "dhgroup": "null"
00:19:10.229 }
00:19:10.229 }
00:19:10.229 ]'
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:10.229 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:10.486 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG:
00:19:10.486 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG:
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:11.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:11.052 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:11.310 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:11.568
00:19:11.568 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:11.568 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:11.568 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:11.826 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:11.827 {
00:19:11.827 "cntlid": 103,
00:19:11.827 "qid": 0,
00:19:11.827 "state": "enabled",
00:19:11.827 "thread": "nvmf_tgt_poll_group_000",
00:19:11.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:19:11.827 "listen_address": {
00:19:11.827 "trtype": "TCP",
00:19:11.827 "adrfam": "IPv4",
00:19:11.827 "traddr": "10.0.0.2",
00:19:11.827 "trsvcid": "4420"
00:19:11.827 },
00:19:11.827 "peer_address": {
00:19:11.827 "trtype": "TCP",
00:19:11.827 "adrfam": "IPv4",
00:19:11.827 "traddr": "10.0.0.1",
00:19:11.827 "trsvcid": "38462"
00:19:11.827 },
00:19:11.827 "auth": {
00:19:11.827 "state": "completed",
00:19:11.827 "digest": "sha512",
00:19:11.827 "dhgroup": "null"
00:19:11.827 }
00:19:11.827 }
00:19:11.827 ]'
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:11.827 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:12.085 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=:
00:19:12.085 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=:
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:12.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:12.651 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:12.909 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.167
00:19:13.167 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:13.167 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:13.167 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:13.425 {
00:19:13.425 "cntlid": 105,
00:19:13.425 "qid": 0,
00:19:13.425 "state": "enabled",
00:19:13.425 "thread": "nvmf_tgt_poll_group_000",
00:19:13.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:19:13.425 "listen_address": {
00:19:13.425 "trtype": "TCP",
00:19:13.425 "adrfam": "IPv4",
00:19:13.425 "traddr": "10.0.0.2",
00:19:13.425 "trsvcid": "4420"
00:19:13.425 },
00:19:13.425 "peer_address": {
00:19:13.425 "trtype": "TCP",
00:19:13.425 "adrfam": "IPv4",
00:19:13.425 "traddr": "10.0.0.1",
00:19:13.425 "trsvcid": "38494"
00:19:13.425 },
00:19:13.425 "auth": {
00:19:13.425 "state": "completed",
00:19:13.425 "digest": "sha512",
00:19:13.425 "dhgroup": "ffdhe2048"
00:19:13.425 }
00:19:13.425 }
00:19:13.425 ]'
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:13.425 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:13.682 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=:
00:19:13.682 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=:
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:14.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:14.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:14.505 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:14.763
00:19:14.763 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:14.763 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:14.763 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:15.021 {
00:19:15.021 "cntlid": 107,
00:19:15.021 "qid": 0,
00:19:15.021 "state": "enabled",
00:19:15.021 "thread": "nvmf_tgt_poll_group_000",
00:19:15.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:19:15.021 "listen_address": {
00:19:15.021 "trtype": "TCP",
00:19:15.021 "adrfam": "IPv4",
00:19:15.021 "traddr": "10.0.0.2",
00:19:15.021 "trsvcid": "4420"
00:19:15.021 },
00:19:15.021 "peer_address": {
00:19:15.021 "trtype": "TCP",
00:19:15.021 "adrfam": "IPv4",
00:19:15.021 "traddr": "10.0.0.1",
00:19:15.021 "trsvcid": "38510"
00:19:15.021 },
00:19:15.021 "auth": {
00:19:15.021 "state": "completed",
00:19:15.021 "digest": "sha512",
00:19:15.021 "dhgroup": "ffdhe2048"
00:19:15.021 }
00:19:15.021 }
00:19:15.021 ]'
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:15.021 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:15.279 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==:
00:19:15.279 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==:
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:15.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:15.845 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:16.102 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- #
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.103 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.360 00:19:16.360 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.360 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.360 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.618 
05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.618 { 00:19:16.618 "cntlid": 109, 00:19:16.618 "qid": 0, 00:19:16.618 "state": "enabled", 00:19:16.618 "thread": "nvmf_tgt_poll_group_000", 00:19:16.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:16.618 "listen_address": { 00:19:16.618 "trtype": "TCP", 00:19:16.618 "adrfam": "IPv4", 00:19:16.618 "traddr": "10.0.0.2", 00:19:16.618 "trsvcid": "4420" 00:19:16.618 }, 00:19:16.618 "peer_address": { 00:19:16.618 "trtype": "TCP", 00:19:16.618 "adrfam": "IPv4", 00:19:16.618 "traddr": "10.0.0.1", 00:19:16.618 "trsvcid": "38524" 00:19:16.618 }, 00:19:16.618 "auth": { 00:19:16.618 "state": "completed", 00:19:16.618 "digest": "sha512", 00:19:16.618 "dhgroup": "ffdhe2048" 00:19:16.618 } 00:19:16.618 } 00:19:16.618 ]' 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.618 05:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.618 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.876 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:16.876 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.442 
05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.442 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.699 05:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.699 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.957 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.957 { 00:19:17.957 "cntlid": 111, 
00:19:17.957 "qid": 0, 00:19:17.957 "state": "enabled", 00:19:17.957 "thread": "nvmf_tgt_poll_group_000", 00:19:17.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:17.957 "listen_address": { 00:19:17.957 "trtype": "TCP", 00:19:17.957 "adrfam": "IPv4", 00:19:17.957 "traddr": "10.0.0.2", 00:19:17.957 "trsvcid": "4420" 00:19:17.957 }, 00:19:17.957 "peer_address": { 00:19:17.957 "trtype": "TCP", 00:19:17.957 "adrfam": "IPv4", 00:19:17.957 "traddr": "10.0.0.1", 00:19:17.957 "trsvcid": "38788" 00:19:17.957 }, 00:19:17.957 "auth": { 00:19:17.957 "state": "completed", 00:19:17.957 "digest": "sha512", 00:19:17.957 "dhgroup": "ffdhe2048" 00:19:17.957 } 00:19:17.957 } 00:19:17.957 ]' 00:19:17.957 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.215 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.473 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:18.473 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.037 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.038 05:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.038 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.295 00:19:19.295 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.295 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.295 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.552 { 00:19:19.552 "cntlid": 113, 00:19:19.552 "qid": 0, 00:19:19.552 "state": "enabled", 00:19:19.552 "thread": "nvmf_tgt_poll_group_000", 00:19:19.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:19.552 "listen_address": { 00:19:19.552 "trtype": "TCP", 00:19:19.552 "adrfam": "IPv4", 00:19:19.552 "traddr": "10.0.0.2", 00:19:19.552 "trsvcid": "4420" 00:19:19.552 }, 00:19:19.552 "peer_address": { 00:19:19.552 "trtype": "TCP", 00:19:19.552 "adrfam": "IPv4", 00:19:19.552 "traddr": "10.0.0.1", 00:19:19.552 "trsvcid": "38824" 00:19:19.552 }, 00:19:19.552 "auth": { 00:19:19.552 "state": 
"completed", 00:19:19.552 "digest": "sha512", 00:19:19.552 "dhgroup": "ffdhe3072" 00:19:19.552 } 00:19:19.552 } 00:19:19.552 ]' 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.552 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:19.810 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:20.375 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.375 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:20.375 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.375 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.632 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.633 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.890 00:19:20.890 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.890 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.890 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.147 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.147 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.148 { 00:19:21.148 "cntlid": 115, 00:19:21.148 "qid": 0, 00:19:21.148 "state": "enabled", 00:19:21.148 "thread": "nvmf_tgt_poll_group_000", 00:19:21.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:21.148 "listen_address": { 00:19:21.148 "trtype": "TCP", 00:19:21.148 "adrfam": "IPv4", 00:19:21.148 "traddr": "10.0.0.2", 00:19:21.148 "trsvcid": "4420" 00:19:21.148 }, 00:19:21.148 "peer_address": { 00:19:21.148 "trtype": "TCP", 00:19:21.148 "adrfam": "IPv4", 00:19:21.148 "traddr": "10.0.0.1", 00:19:21.148 "trsvcid": "38850" 00:19:21.148 }, 00:19:21.148 "auth": { 00:19:21.148 "state": "completed", 00:19:21.148 "digest": "sha512", 00:19:21.148 "dhgroup": "ffdhe3072" 00:19:21.148 } 00:19:21.148 } 00:19:21.148 ]' 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.148 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.405 05:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.405 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.405 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.405 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.405 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.663 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:21.663 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.230 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.230 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.487 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.745 05:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.745 { 00:19:22.745 "cntlid": 117, 00:19:22.745 "qid": 0, 00:19:22.745 "state": "enabled", 00:19:22.745 "thread": "nvmf_tgt_poll_group_000", 00:19:22.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:22.745 "listen_address": { 00:19:22.745 "trtype": "TCP", 00:19:22.745 "adrfam": "IPv4", 00:19:22.745 "traddr": "10.0.0.2", 00:19:22.745 "trsvcid": "4420" 00:19:22.745 }, 00:19:22.745 "peer_address": { 00:19:22.745 "trtype": "TCP", 00:19:22.745 "adrfam": "IPv4", 00:19:22.745 "traddr": "10.0.0.1", 00:19:22.745 "trsvcid": "38890" 00:19:22.745 }, 00:19:22.745 "auth": { 00:19:22.745 "state": "completed", 00:19:22.745 "digest": "sha512", 00:19:22.745 "dhgroup": "ffdhe3072" 00:19:22.745 } 00:19:22.745 } 00:19:22.745 ]' 00:19:22.745 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.003 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.261 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:23.261 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:23.827 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.085 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.343 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.343 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.602 { 00:19:24.602 "cntlid": 119, 00:19:24.602 "qid": 0, 00:19:24.602 "state": "enabled", 00:19:24.602 "thread": "nvmf_tgt_poll_group_000", 00:19:24.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:24.602 "listen_address": { 00:19:24.602 "trtype": "TCP", 00:19:24.602 "adrfam": "IPv4", 00:19:24.602 "traddr": "10.0.0.2", 00:19:24.602 "trsvcid": "4420" 00:19:24.602 }, 00:19:24.602 "peer_address": { 00:19:24.602 "trtype": "TCP", 00:19:24.602 "adrfam": "IPv4", 00:19:24.602 "traddr": "10.0.0.1", 
00:19:24.602 "trsvcid": "38908" 00:19:24.602 }, 00:19:24.602 "auth": { 00:19:24.602 "state": "completed", 00:19:24.602 "digest": "sha512", 00:19:24.602 "dhgroup": "ffdhe3072" 00:19:24.602 } 00:19:24.602 } 00:19:24.602 ]' 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.602 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.860 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:24.860 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.425 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.683 05:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.683 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.941 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.941 { 00:19:25.941 "cntlid": 121, 00:19:25.941 "qid": 0, 00:19:25.941 "state": "enabled", 00:19:25.941 "thread": "nvmf_tgt_poll_group_000", 00:19:25.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:25.941 "listen_address": { 00:19:25.941 "trtype": "TCP", 00:19:25.941 "adrfam": "IPv4", 00:19:25.941 "traddr": "10.0.0.2", 00:19:25.941 "trsvcid": "4420" 00:19:25.941 }, 00:19:25.941 "peer_address": { 00:19:25.941 "trtype": "TCP", 00:19:25.941 "adrfam": "IPv4", 00:19:25.941 "traddr": "10.0.0.1", 00:19:25.941 "trsvcid": "38934" 00:19:25.941 }, 00:19:25.941 "auth": { 00:19:25.941 "state": "completed", 00:19:25.941 "digest": "sha512", 00:19:25.941 "dhgroup": "ffdhe4096" 00:19:25.941 } 00:19:25.941 } 00:19:25.941 ]' 00:19:25.941 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.199 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.199 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.199 05:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.199 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.199 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.199 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.199 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.457 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:26.457 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:27.023 05:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.023 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.280 05:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.280 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.538 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.538 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.795 { 00:19:27.795 "cntlid": 123, 00:19:27.795 "qid": 0, 00:19:27.795 "state": "enabled", 00:19:27.795 "thread": "nvmf_tgt_poll_group_000", 00:19:27.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:27.795 "listen_address": { 00:19:27.795 "trtype": "TCP", 00:19:27.795 "adrfam": "IPv4", 00:19:27.795 "traddr": "10.0.0.2", 00:19:27.795 "trsvcid": "4420" 00:19:27.795 }, 00:19:27.795 "peer_address": { 00:19:27.795 "trtype": "TCP", 00:19:27.795 "adrfam": "IPv4", 00:19:27.795 "traddr": "10.0.0.1", 00:19:27.795 "trsvcid": "37492" 00:19:27.795 }, 00:19:27.795 "auth": { 00:19:27.795 "state": "completed", 00:19:27.795 "digest": "sha512", 00:19:27.795 "dhgroup": "ffdhe4096" 00:19:27.795 } 00:19:27.795 } 00:19:27.795 ]' 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.795 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.053 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:28.053 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.619 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.619 05:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.877 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.135 00:19:29.135 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.135 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.135 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.393 { 00:19:29.393 "cntlid": 125, 00:19:29.393 "qid": 0, 00:19:29.393 "state": "enabled", 00:19:29.393 "thread": "nvmf_tgt_poll_group_000", 00:19:29.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:29.393 "listen_address": { 00:19:29.393 "trtype": "TCP", 00:19:29.393 "adrfam": "IPv4", 00:19:29.393 "traddr": "10.0.0.2", 00:19:29.393 
"trsvcid": "4420" 00:19:29.393 }, 00:19:29.393 "peer_address": { 00:19:29.393 "trtype": "TCP", 00:19:29.393 "adrfam": "IPv4", 00:19:29.393 "traddr": "10.0.0.1", 00:19:29.393 "trsvcid": "37516" 00:19:29.393 }, 00:19:29.393 "auth": { 00:19:29.393 "state": "completed", 00:19:29.393 "digest": "sha512", 00:19:29.393 "dhgroup": "ffdhe4096" 00:19:29.393 } 00:19:29.393 } 00:19:29.393 ]' 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.393 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.650 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:29.650 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:30.215 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.215 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.473 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.730 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.730 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.987 { 00:19:30.987 "cntlid": 127, 00:19:30.987 "qid": 0, 00:19:30.987 "state": "enabled", 00:19:30.987 "thread": "nvmf_tgt_poll_group_000", 00:19:30.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:30.987 "listen_address": { 00:19:30.987 "trtype": "TCP", 00:19:30.987 "adrfam": "IPv4", 00:19:30.987 "traddr": "10.0.0.2", 00:19:30.987 "trsvcid": "4420" 00:19:30.987 }, 00:19:30.987 "peer_address": { 00:19:30.987 "trtype": "TCP", 00:19:30.987 "adrfam": "IPv4", 00:19:30.987 "traddr": "10.0.0.1", 00:19:30.987 "trsvcid": "37544" 00:19:30.987 }, 00:19:30.987 "auth": { 00:19:30.987 "state": "completed", 00:19:30.987 "digest": "sha512", 00:19:30.987 "dhgroup": "ffdhe4096" 00:19:30.987 } 00:19:30.987 } 00:19:30.987 ]' 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.987 05:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.987 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.244 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:31.244 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.809 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.067 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.323 00:19:32.323 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.323 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.324 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.662 05:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.662 { 00:19:32.662 "cntlid": 129, 00:19:32.662 "qid": 0, 00:19:32.662 "state": "enabled", 00:19:32.662 "thread": "nvmf_tgt_poll_group_000", 00:19:32.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:32.662 "listen_address": { 00:19:32.662 "trtype": "TCP", 00:19:32.662 "adrfam": "IPv4", 00:19:32.662 "traddr": "10.0.0.2", 00:19:32.662 "trsvcid": "4420" 00:19:32.662 }, 00:19:32.662 "peer_address": { 00:19:32.662 "trtype": "TCP", 00:19:32.662 "adrfam": "IPv4", 00:19:32.662 "traddr": "10.0.0.1", 00:19:32.662 "trsvcid": "37568" 00:19:32.662 }, 00:19:32.662 "auth": { 00:19:32.662 "state": "completed", 00:19:32.662 "digest": "sha512", 00:19:32.662 "dhgroup": "ffdhe6144" 00:19:32.662 } 00:19:32.662 } 00:19:32.662 ]' 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.662 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.663 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.663 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.663 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.663 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.941 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:32.941 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.507 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.507 05:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.765 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.023 00:19:34.023 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.023 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.023 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.281 { 00:19:34.281 "cntlid": 131, 00:19:34.281 "qid": 0, 00:19:34.281 "state": "enabled", 00:19:34.281 "thread": "nvmf_tgt_poll_group_000", 00:19:34.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:34.281 "listen_address": { 00:19:34.281 "trtype": "TCP", 00:19:34.281 "adrfam": "IPv4", 00:19:34.281 "traddr": "10.0.0.2", 00:19:34.281 
"trsvcid": "4420" 00:19:34.281 }, 00:19:34.281 "peer_address": { 00:19:34.281 "trtype": "TCP", 00:19:34.281 "adrfam": "IPv4", 00:19:34.281 "traddr": "10.0.0.1", 00:19:34.281 "trsvcid": "37586" 00:19:34.281 }, 00:19:34.281 "auth": { 00:19:34.281 "state": "completed", 00:19:34.281 "digest": "sha512", 00:19:34.281 "dhgroup": "ffdhe6144" 00:19:34.281 } 00:19:34.281 } 00:19:34.281 ]' 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.281 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.539 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:34.539 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.105 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.364 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.622 00:19:35.622 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.622 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:35.622 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.880 { 00:19:35.880 "cntlid": 133, 00:19:35.880 "qid": 0, 00:19:35.880 "state": "enabled", 00:19:35.880 "thread": "nvmf_tgt_poll_group_000", 00:19:35.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:35.880 "listen_address": { 00:19:35.880 "trtype": "TCP", 00:19:35.880 "adrfam": "IPv4", 00:19:35.880 "traddr": "10.0.0.2", 00:19:35.880 "trsvcid": "4420" 00:19:35.880 }, 00:19:35.880 "peer_address": { 00:19:35.880 "trtype": "TCP", 00:19:35.880 "adrfam": "IPv4", 00:19:35.880 "traddr": "10.0.0.1", 00:19:35.880 "trsvcid": "37622" 00:19:35.880 }, 00:19:35.880 "auth": { 00:19:35.880 "state": "completed", 00:19:35.880 "digest": "sha512", 00:19:35.880 "dhgroup": "ffdhe6144" 00:19:35.880 } 00:19:35.880 } 00:19:35.880 ]' 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.880 05:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.880 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.139 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.139 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.139 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.139 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:36.139 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.705 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.964 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.531 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.531 { 00:19:37.531 "cntlid": 135, 00:19:37.531 "qid": 0, 00:19:37.531 "state": "enabled", 00:19:37.531 "thread": "nvmf_tgt_poll_group_000", 00:19:37.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:37.531 "listen_address": { 00:19:37.531 "trtype": "TCP", 00:19:37.531 "adrfam": "IPv4", 00:19:37.531 "traddr": "10.0.0.2", 00:19:37.531 "trsvcid": "4420" 00:19:37.531 }, 00:19:37.531 "peer_address": { 00:19:37.531 "trtype": "TCP", 00:19:37.531 "adrfam": "IPv4", 00:19:37.531 "traddr": "10.0.0.1", 00:19:37.531 "trsvcid": "45114" 00:19:37.531 }, 00:19:37.531 "auth": { 00:19:37.531 "state": "completed", 00:19:37.531 "digest": "sha512", 00:19:37.531 "dhgroup": "ffdhe6144" 00:19:37.531 } 00:19:37.531 } 00:19:37.531 ]' 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.531 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.789 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.789 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.789 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.789 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.789 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.047 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:38.047 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.613 05:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.613 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.179 00:19:39.179 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.179 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.179 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.436 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.436 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.436 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.436 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.436 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.437 { 00:19:39.437 "cntlid": 137, 00:19:39.437 "qid": 0, 00:19:39.437 "state": "enabled", 00:19:39.437 "thread": "nvmf_tgt_poll_group_000", 00:19:39.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:39.437 "listen_address": { 00:19:39.437 "trtype": "TCP", 00:19:39.437 "adrfam": "IPv4", 00:19:39.437 "traddr": "10.0.0.2", 00:19:39.437 
"trsvcid": "4420" 00:19:39.437 }, 00:19:39.437 "peer_address": { 00:19:39.437 "trtype": "TCP", 00:19:39.437 "adrfam": "IPv4", 00:19:39.437 "traddr": "10.0.0.1", 00:19:39.437 "trsvcid": "45142" 00:19:39.437 }, 00:19:39.437 "auth": { 00:19:39.437 "state": "completed", 00:19:39.437 "digest": "sha512", 00:19:39.437 "dhgroup": "ffdhe8192" 00:19:39.437 } 00:19:39.437 } 00:19:39.437 ]' 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.437 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.695 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:39.695 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:40.260 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.261 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.518 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:40.518 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.518 05:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.519 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.084 00:19:41.084 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.084 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.084 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.343 { 00:19:41.343 "cntlid": 139, 00:19:41.343 "qid": 0, 00:19:41.343 "state": "enabled", 00:19:41.343 "thread": "nvmf_tgt_poll_group_000", 00:19:41.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:41.343 "listen_address": { 00:19:41.343 "trtype": "TCP", 00:19:41.343 "adrfam": "IPv4", 00:19:41.343 "traddr": "10.0.0.2", 00:19:41.343 "trsvcid": "4420" 00:19:41.343 }, 00:19:41.343 "peer_address": { 00:19:41.343 "trtype": "TCP", 00:19:41.343 "adrfam": "IPv4", 00:19:41.343 "traddr": "10.0.0.1", 00:19:41.343 "trsvcid": "45164" 00:19:41.343 }, 00:19:41.343 "auth": { 00:19:41.343 "state": "completed", 00:19:41.343 "digest": "sha512", 00:19:41.343 "dhgroup": "ffdhe8192" 00:19:41.343 } 00:19:41.343 } 00:19:41.343 ]' 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.343 05:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.343 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.601 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:41.601 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: --dhchap-ctrl-secret DHHC-1:02:NWIwMzQ1OGEzNmQxODlmNWZlZjEyN2UyNmM4NzliZGM1MzI4NjFiYmFmZDVmNDg1474txw==: 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.166 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.424 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.998 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.998 05:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.998 { 00:19:42.998 "cntlid": 141, 00:19:42.998 "qid": 0, 00:19:42.998 "state": "enabled", 00:19:42.998 "thread": "nvmf_tgt_poll_group_000", 00:19:42.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:42.998 "listen_address": { 00:19:42.998 "trtype": "TCP", 00:19:42.998 "adrfam": "IPv4", 00:19:42.998 "traddr": "10.0.0.2", 00:19:42.998 "trsvcid": "4420" 00:19:42.998 }, 00:19:42.998 "peer_address": { 00:19:42.998 "trtype": "TCP", 00:19:42.998 "adrfam": "IPv4", 00:19:42.998 "traddr": "10.0.0.1", 00:19:42.998 "trsvcid": "45186" 00:19:42.998 }, 00:19:42.998 "auth": { 00:19:42.998 "state": "completed", 00:19:42.998 "digest": "sha512", 00:19:42.998 "dhgroup": "ffdhe8192" 00:19:42.998 } 00:19:42.998 } 00:19:42.998 ]' 00:19:42.998 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.257 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.257 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.257 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.257 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.257 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.257 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.257 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.515 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:43.515 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:01:ZmZmMzZiNDZkZDQ0YmU2YjBhYzgzYjY3ZjhiYWU0NDSA//aG: 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.081 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.081 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.339 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.339 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.339 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.339 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.597 00:19:44.597 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.597 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.597 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.855 { 00:19:44.855 "cntlid": 143, 00:19:44.855 "qid": 0, 00:19:44.855 "state": "enabled", 00:19:44.855 "thread": "nvmf_tgt_poll_group_000", 00:19:44.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:44.855 "listen_address": { 00:19:44.855 "trtype": "TCP", 00:19:44.855 "adrfam": 
"IPv4", 00:19:44.855 "traddr": "10.0.0.2", 00:19:44.855 "trsvcid": "4420" 00:19:44.855 }, 00:19:44.855 "peer_address": { 00:19:44.855 "trtype": "TCP", 00:19:44.855 "adrfam": "IPv4", 00:19:44.855 "traddr": "10.0.0.1", 00:19:44.855 "trsvcid": "45222" 00:19:44.855 }, 00:19:44.855 "auth": { 00:19:44.855 "state": "completed", 00:19:44.855 "digest": "sha512", 00:19:44.855 "dhgroup": "ffdhe8192" 00:19:44.855 } 00:19:44.855 } 00:19:44.855 ]' 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.855 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.113 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.113 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.113 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.113 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.113 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.371 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:45.371 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.936 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.194 05:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.194 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.195 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.195 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.195 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.195 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.453 00:19:46.453 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.453 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.453 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.717 { 00:19:46.717 "cntlid": 145, 00:19:46.717 "qid": 0, 00:19:46.717 "state": "enabled", 00:19:46.717 "thread": "nvmf_tgt_poll_group_000", 00:19:46.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:46.717 "listen_address": { 00:19:46.717 "trtype": "TCP", 00:19:46.717 "adrfam": "IPv4", 00:19:46.717 "traddr": "10.0.0.2", 00:19:46.717 "trsvcid": "4420" 00:19:46.717 }, 00:19:46.717 "peer_address": { 00:19:46.717 "trtype": "TCP", 00:19:46.717 "adrfam": "IPv4", 00:19:46.717 "traddr": "10.0.0.1", 00:19:46.717 "trsvcid": "45264" 00:19:46.717 }, 00:19:46.717 "auth": { 00:19:46.717 "state": 
"completed", 00:19:46.717 "digest": "sha512", 00:19:46.717 "dhgroup": "ffdhe8192" 00:19:46.717 } 00:19:46.717 } 00:19:46.717 ]' 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.717 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.975 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.975 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.975 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.975 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.975 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.233 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:47.233 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzRiNjJhYTQxNjY0MzA2MDg1MTdmNzcxYzM0NmNjYjNjMzgyOTY0MTc0M2FiNmM5KDCskQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODgxMjA5Mzc5ZjQ3YWFjZDJmNTYxN2RhMGFlYmY0NTVkMmNlMGUwZTg0MGE1MGY2NTFjZjU1NDhlNTI5ZDg2OEVENdE=: 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:47.797 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:48.055 request: 00:19:48.055 { 00:19:48.055 "name": "nvme0", 00:19:48.055 "trtype": "tcp", 00:19:48.055 "traddr": "10.0.0.2", 00:19:48.055 "adrfam": "ipv4", 00:19:48.055 "trsvcid": "4420", 00:19:48.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:48.055 "prchk_reftag": false, 00:19:48.055 "prchk_guard": false, 00:19:48.055 "hdgst": false, 00:19:48.055 "ddgst": false, 00:19:48.055 "dhchap_key": "key2", 00:19:48.055 "allow_unrecognized_csi": false, 00:19:48.055 "method": "bdev_nvme_attach_controller", 00:19:48.055 "req_id": 1 00:19:48.055 } 00:19:48.055 Got JSON-RPC error response 00:19:48.055 response: 00:19:48.055 { 00:19:48.055 "code": -5, 00:19:48.055 "message": 
"Input/output error" 00:19:48.055 } 00:19:48.055 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:48.313 05:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.313 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.571 request: 00:19:48.571 { 00:19:48.571 "name": "nvme0", 00:19:48.571 "trtype": "tcp", 00:19:48.571 "traddr": "10.0.0.2", 00:19:48.571 "adrfam": "ipv4", 00:19:48.571 "trsvcid": "4420", 00:19:48.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:48.571 "prchk_reftag": false, 00:19:48.571 "prchk_guard": false, 00:19:48.571 "hdgst": 
false, 00:19:48.571 "ddgst": false, 00:19:48.571 "dhchap_key": "key1", 00:19:48.571 "dhchap_ctrlr_key": "ckey2", 00:19:48.571 "allow_unrecognized_csi": false, 00:19:48.571 "method": "bdev_nvme_attach_controller", 00:19:48.571 "req_id": 1 00:19:48.571 } 00:19:48.571 Got JSON-RPC error response 00:19:48.571 response: 00:19:48.571 { 00:19:48.571 "code": -5, 00:19:48.571 "message": "Input/output error" 00:19:48.571 } 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.571 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.138 request: 00:19:49.138 { 00:19:49.138 "name": "nvme0", 00:19:49.138 "trtype": 
"tcp", 00:19:49.138 "traddr": "10.0.0.2", 00:19:49.138 "adrfam": "ipv4", 00:19:49.138 "trsvcid": "4420", 00:19:49.138 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:49.138 "prchk_reftag": false, 00:19:49.138 "prchk_guard": false, 00:19:49.138 "hdgst": false, 00:19:49.138 "ddgst": false, 00:19:49.138 "dhchap_key": "key1", 00:19:49.138 "dhchap_ctrlr_key": "ckey1", 00:19:49.138 "allow_unrecognized_csi": false, 00:19:49.138 "method": "bdev_nvme_attach_controller", 00:19:49.138 "req_id": 1 00:19:49.138 } 00:19:49.138 Got JSON-RPC error response 00:19:49.138 response: 00:19:49.138 { 00:19:49.138 "code": -5, 00:19:49.138 "message": "Input/output error" 00:19:49.138 } 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 115559 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 115559 ']' 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 115559 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.138 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115559 00:19:49.138 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.138 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.138 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115559' 00:19:49.138 killing process with pid 115559 00:19:49.138 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 115559 00:19:49.138 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 115559 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=137489 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 137489 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 137489 ']' 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.396 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 137489 00:19:49.654 
05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 137489 ']' 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.654 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 null0 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jj6 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 
05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7vF ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vF 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kAz 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.r5n ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5n 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.m2b 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.tH1 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tH1 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.plO 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.913 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.847 nvme0n1 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.847 { 00:19:50.847 "cntlid": 1, 00:19:50.847 "qid": 0, 00:19:50.847 "state": "enabled", 00:19:50.847 "thread": "nvmf_tgt_poll_group_000", 00:19:50.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:50.847 "listen_address": { 00:19:50.847 "trtype": "TCP", 00:19:50.847 "adrfam": "IPv4", 00:19:50.847 "traddr": "10.0.0.2", 00:19:50.847 "trsvcid": "4420" 00:19:50.847 }, 00:19:50.847 "peer_address": { 00:19:50.847 "trtype": "TCP", 00:19:50.847 "adrfam": "IPv4", 00:19:50.847 "traddr": "10.0.0.1", 00:19:50.847 "trsvcid": "57662" 00:19:50.847 }, 00:19:50.847 "auth": { 
00:19:50.847 "state": "completed", 00:19:50.847 "digest": "sha512", 00:19:50.847 "dhgroup": "ffdhe8192" 00:19:50.847 } 00:19:50.847 } 00:19:50.847 ]' 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.847 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.106 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.106 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.106 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.106 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.106 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.364 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:51.364 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:51.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.930 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.188 request: 00:19:52.188 { 00:19:52.188 "name": "nvme0", 00:19:52.188 "trtype": "tcp", 00:19:52.188 "traddr": "10.0.0.2", 00:19:52.188 "adrfam": "ipv4", 00:19:52.188 "trsvcid": "4420", 00:19:52.188 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:52.188 "prchk_reftag": false, 00:19:52.188 "prchk_guard": false, 00:19:52.188 "hdgst": false, 00:19:52.188 "ddgst": false, 00:19:52.188 "dhchap_key": "key3", 00:19:52.188 "allow_unrecognized_csi": false, 00:19:52.188 "method": "bdev_nvme_attach_controller", 00:19:52.188 "req_id": 1 00:19:52.188 } 
00:19:52.188 Got JSON-RPC error response 00:19:52.188 response: 00:19:52.188 { 00:19:52.188 "code": -5, 00:19:52.188 "message": "Input/output error" 00:19:52.188 } 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:52.188 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.446 05:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.446 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.704 request: 00:19:52.704 { 00:19:52.704 "name": "nvme0", 00:19:52.704 "trtype": "tcp", 00:19:52.704 "traddr": "10.0.0.2", 00:19:52.704 "adrfam": "ipv4", 00:19:52.704 "trsvcid": "4420", 00:19:52.704 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:52.704 "prchk_reftag": false, 00:19:52.704 "prchk_guard": false, 00:19:52.704 "hdgst": false, 00:19:52.704 "ddgst": false, 00:19:52.704 "dhchap_key": "key3", 00:19:52.704 "allow_unrecognized_csi": false, 00:19:52.704 "method": "bdev_nvme_attach_controller", 00:19:52.704 "req_id": 1 00:19:52.704 } 00:19:52.704 Got JSON-RPC error response 00:19:52.704 response: 00:19:52.704 { 00:19:52.704 "code": -5, 00:19:52.704 "message": "Input/output error" 00:19:52.704 } 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:52.704 05:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.704 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.962 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.962 05:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.220 request: 00:19:53.220 { 00:19:53.220 "name": "nvme0", 00:19:53.220 "trtype": "tcp", 00:19:53.220 "traddr": "10.0.0.2", 00:19:53.220 "adrfam": "ipv4", 00:19:53.220 "trsvcid": "4420", 00:19:53.220 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:53.220 "prchk_reftag": false, 00:19:53.220 "prchk_guard": false, 00:19:53.220 "hdgst": false, 00:19:53.220 "ddgst": false, 00:19:53.220 "dhchap_key": "key0", 00:19:53.220 "dhchap_ctrlr_key": "key1", 00:19:53.220 "allow_unrecognized_csi": false, 00:19:53.220 "method": "bdev_nvme_attach_controller", 00:19:53.220 "req_id": 1 00:19:53.220 } 00:19:53.220 Got JSON-RPC error response 00:19:53.220 response: 00:19:53.220 { 00:19:53.220 "code": -5, 00:19:53.220 "message": "Input/output error" 00:19:53.220 } 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:53.220 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:53.478 nvme0n1 00:19:53.478 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:53.478 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:53.478 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.736 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.736 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.736 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:53.994 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:54.560 nvme0n1 00:19:54.560 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:54.560 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:54.560 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:54.818 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.075 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.075 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:55.075 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: --dhchap-ctrl-secret DHHC-1:03:NWRkYzYxYTliYWQ1MDZiZGJhOTIyM2FkM2ZhOTY1MjMwZTBhN2FiOGMzODFlZDRhNTllNmIyNmI4MTgzYzA5YaYZJPk=: 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:55.641 05:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.641 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:19:55.899 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:56.157 request: 00:19:56.157 { 00:19:56.157 "name": "nvme0", 00:19:56.157 "trtype": "tcp", 00:19:56.157 "traddr": "10.0.0.2", 00:19:56.157 "adrfam": "ipv4", 00:19:56.157 "trsvcid": "4420", 00:19:56.157 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:56.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:56.157 "prchk_reftag": false, 00:19:56.157 "prchk_guard": false, 00:19:56.157 "hdgst": false, 00:19:56.157 "ddgst": false, 00:19:56.157 "dhchap_key": "key1", 00:19:56.157 "allow_unrecognized_csi": false, 00:19:56.157 "method": "bdev_nvme_attach_controller", 00:19:56.157 "req_id": 1 00:19:56.157 } 00:19:56.157 Got JSON-RPC error response 00:19:56.157 response: 00:19:56.157 { 00:19:56.157 "code": -5, 00:19:56.157 "message": "Input/output error" 00:19:56.157 } 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:56.157 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:57.091 nvme0n1 00:19:57.091 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:57.091 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:57.091 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.091 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.091 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.091 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:57.349 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:57.606 nvme0n1 00:19:57.606 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:57.606 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:57.606 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.864 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.864 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.864 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:58.123 
05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: '' 2s 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: ]] 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Mjg2YjM5MThjNGNhNTU1OGIwMmM4YTUyNzkxNDY5ZWXaPzlj: 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:58.123 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:00.022 05:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: 2s 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: ]] 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OGU4Nzc4ZWY5ZTQyNjcxNzU2ZjMxNmY0NmNjZGY2NWE1OTMxZjFlNmYwZWY1NDg1oGO2Ug==: 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:00.022 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.549 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.549 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:02.549 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:02.549 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:02.807 nvme0n1 00:20:03.065 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:03.065 05:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.065 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.065 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.065 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:03.065 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:03.321 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:03.321 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:03.321 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:20:03.578 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:03.835 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:03.835 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:03.835 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:04.093 05:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.093 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.657 request: 00:20:04.657 { 00:20:04.657 "name": "nvme0", 00:20:04.657 "dhchap_key": "key1", 00:20:04.657 "dhchap_ctrlr_key": "key3", 00:20:04.657 "method": "bdev_nvme_set_keys", 00:20:04.657 "req_id": 1 00:20:04.657 } 00:20:04.657 Got JSON-RPC error response 00:20:04.657 response: 00:20:04.657 { 00:20:04.657 "code": -13, 00:20:04.657 "message": "Permission denied" 00:20:04.657 } 00:20:04.657 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:04.657 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.657 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.657 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.658 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:04.658 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:04.658 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.658 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:04.658 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:05.591 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:05.591 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:05.591 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:05.849 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:06.782 nvme0n1 00:20:06.782 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:06.782 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.782 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.782 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.782 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:06.782 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:06.783 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:06.783 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:06.783 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.783 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:06.783 
05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.783 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:06.783 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:07.041 request: 00:20:07.041 { 00:20:07.041 "name": "nvme0", 00:20:07.041 "dhchap_key": "key2", 00:20:07.041 "dhchap_ctrlr_key": "key0", 00:20:07.041 "method": "bdev_nvme_set_keys", 00:20:07.041 "req_id": 1 00:20:07.041 } 00:20:07.041 Got JSON-RPC error response 00:20:07.041 response: 00:20:07.041 { 00:20:07.041 "code": -13, 00:20:07.041 "message": "Permission denied" 00:20:07.041 } 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:07.041 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.299 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:07.299 05:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:08.232 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:08.232 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:08.232 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 115648 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 115648 ']' 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 115648 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115648 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115648' 00:20:08.490 killing process with 
pid 115648 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 115648 00:20:08.490 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 115648 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.056 rmmod nvme_tcp 00:20:09.056 rmmod nvme_fabrics 00:20:09.056 rmmod nvme_keyring 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 137489 ']' 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 137489 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 137489 ']' 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 137489 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:09.056 
05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137489 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137489' 00:20:09.056 killing process with pid 137489 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 137489 00:20:09.056 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 137489 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.056 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.315 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.315 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.315 05:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.315 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.315 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Jj6 /tmp/spdk.key-sha256.kAz /tmp/spdk.key-sha384.m2b /tmp/spdk.key-sha512.plO /tmp/spdk.key-sha512.7vF /tmp/spdk.key-sha384.r5n /tmp/spdk.key-sha256.tH1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:11.220 00:20:11.220 real 2m33.938s 00:20:11.220 user 5m52.304s 00:20:11.220 sys 0m24.744s 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.220 ************************************ 00:20:11.220 END TEST nvmf_auth_target 00:20:11.220 ************************************ 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:20:11.220 ************************************ 00:20:11.220 START TEST nvmf_bdevio_no_huge 00:20:11.220 ************************************ 00:20:11.220 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:11.480 * Looking for test storage... 00:20:11.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.480 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.481 05:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:11.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.481 --rc genhtml_branch_coverage=1 00:20:11.481 --rc genhtml_function_coverage=1 00:20:11.481 --rc genhtml_legend=1 00:20:11.481 --rc geninfo_all_blocks=1 00:20:11.481 --rc geninfo_unexecuted_blocks=1 00:20:11.481 00:20:11.481 ' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:11.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.481 --rc genhtml_branch_coverage=1 00:20:11.481 --rc genhtml_function_coverage=1 00:20:11.481 --rc genhtml_legend=1 00:20:11.481 --rc geninfo_all_blocks=1 00:20:11.481 --rc geninfo_unexecuted_blocks=1 00:20:11.481 00:20:11.481 ' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:11.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.481 --rc genhtml_branch_coverage=1 00:20:11.481 --rc genhtml_function_coverage=1 00:20:11.481 --rc genhtml_legend=1 00:20:11.481 --rc geninfo_all_blocks=1 00:20:11.481 --rc geninfo_unexecuted_blocks=1 00:20:11.481 00:20:11.481 ' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:11.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.481 --rc genhtml_branch_coverage=1 00:20:11.481 --rc genhtml_function_coverage=1 00:20:11.481 --rc 
genhtml_legend=1 00:20:11.481 --rc geninfo_all_blocks=1 00:20:11.481 --rc geninfo_unexecuted_blocks=1 00:20:11.481 00:20:11.481 ' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
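The common.sh setup traced above builds an `NVME_HOST` array pairing `--hostnqn`/`--hostid` for later `nvme connect` calls. A print-only sketch of that pattern, using the NQN and host ID values this run generated (the connect target shown is the subsystem configured later in this log; the command is echoed, not executed, since it needs root and real hardware):

```shell
# Sketch of the NVME_HOST array pattern from nvmf/common.sh above.
# NQN/UUID values are the ones logged in this run.
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562"
NVME_HOSTID="801347e8-3fd0-e911-906e-0017a4403562"
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT="nvme connect"

# Expanding the array keeps each option a single word; echoed rather
# than executed.
echo "$NVME_CONNECT ${NVME_HOST[*]} -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1"
```

Keeping the options in an array (rather than a flat string) is what lets the harness splice them into many different `nvme` invocations without re-quoting.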
00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.481 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.047 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:20:18.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:18.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:18.048 Found net devices under 0000:af:00.0: cvl_0_0 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.048 
05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:18.048 Found net devices under 0000:af:00.1: cvl_0_1 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.048 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:18.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:20:18.307 00:20:18.307 --- 10.0.0.2 ping statistics --- 00:20:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.307 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:20:18.307 00:20:18.307 --- 10.0.0.1 ping statistics --- 00:20:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.307 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.307 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=144877 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 144877 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 144877 ']' 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.308 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.308 [2024-12-10 05:45:36.150472] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:20:18.308 [2024-12-10 05:45:36.150525] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:18.308 [2024-12-10 05:45:36.240804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.566 [2024-12-10 05:45:36.287006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.566 [2024-12-10 05:45:36.287037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.566 [2024-12-10 05:45:36.287044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.566 [2024-12-10 05:45:36.287050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.566 [2024-12-10 05:45:36.287056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
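The namespace wiring, iptables rule, and target launch that `nvmftestinit`/`nvmfappstart` performed above can be restated as a compact sketch. The function only prints the commands (they require root and this testbed's `cvl_0_*` NICs); interface names, addresses, and flags are copied from the log, and `nvmf_tgt` stands in for the full build path used in the run:

```shell
# Print-only sketch of the network-namespace setup and nvmf_tgt launch
# captured in the log above. Nothing here is executed for real.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1

print_testbed_setup() {
    cat <<EOF
ip netns add $NVMF_TARGET_NAMESPACE
ip link set $TARGET_IF netns $NVMF_TARGET_NAMESPACE
ip addr add 10.0.0.1/24 dev $INITIATOR_IF
ip netns exec $NVMF_TARGET_NAMESPACE ip addr add 10.0.0.2/24 dev $TARGET_IF
ip link set $INITIATOR_IF up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set $TARGET_IF up
iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT
ip netns exec $NVMF_TARGET_NAMESPACE nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
EOF
}

print_testbed_setup
```

The two cross-namespace pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) verify this wiring before the target is started.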
00:20:18.566 [2024-12-10 05:45:36.288205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:18.566 [2024-12-10 05:45:36.288334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:18.567 [2024-12-10 05:45:36.288442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.567 [2024-12-10 05:45:36.288442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:19.134 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.134 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:19.134 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.134 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.134 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 [2024-12-10 05:45:37.040703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.134 05:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 Malloc0 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 [2024-12-10 05:45:37.076963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.134 05:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.134 { 00:20:19.134 "params": { 00:20:19.134 "name": "Nvme$subsystem", 00:20:19.134 "trtype": "$TEST_TRANSPORT", 00:20:19.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.134 "adrfam": "ipv4", 00:20:19.134 "trsvcid": "$NVMF_PORT", 00:20:19.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.134 "hdgst": ${hdgst:-false}, 00:20:19.134 "ddgst": ${ddgst:-false} 00:20:19.134 }, 00:20:19.134 "method": "bdev_nvme_attach_controller" 00:20:19.134 } 00:20:19.134 EOF 00:20:19.134 )") 00:20:19.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:19.392 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
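The `rpc_cmd` calls bdevio.sh issued above (transport, malloc bdev, subsystem, namespace, listener) map onto plain `scripts/rpc.py` invocations. A print-only sketch with the argument values taken from the log; treating `scripts/rpc.py` as the underlying tool is an assumption, since `rpc_cmd` is the harness's wrapper around it:

```shell
# Print-only sketch of the RPC sequence from target/bdevio.sh above:
# create the TCP transport, a 64 MiB / 512 B malloc bdev, a subsystem,
# attach the namespace, and add a TCP listener on 10.0.0.2:4420.
rpc="scripts/rpc.py"
print_rpc_sequence() {
    for args in \
        "nvmf_create_transport -t tcp -o -u 8192" \
        "bdev_malloc_create 64 512 -b Malloc0" \
        "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001" \
        "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0" \
        "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"; do
        echo "$rpc $args"
    done
}
print_rpc_sequence
```

The order matters: the transport must exist before listeners, and the bdev before it can be attached as a namespace.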
00:20:19.392 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:19.392 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:19.392 "params": { 00:20:19.392 "name": "Nvme1", 00:20:19.392 "trtype": "tcp", 00:20:19.392 "traddr": "10.0.0.2", 00:20:19.392 "adrfam": "ipv4", 00:20:19.392 "trsvcid": "4420", 00:20:19.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.392 "hdgst": false, 00:20:19.392 "ddgst": false 00:20:19.392 }, 00:20:19.392 "method": "bdev_nvme_attach_controller" 00:20:19.392 }' 00:20:19.392 [2024-12-10 05:45:37.128603] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:20:19.392 [2024-12-10 05:45:37.128649] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid144925 ] 00:20:19.392 [2024-12-10 05:45:37.211884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.392 [2024-12-10 05:45:37.259868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.392 [2024-12-10 05:45:37.259974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.392 [2024-12-10 05:45:37.259974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.650 I/O targets: 00:20:19.650 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:19.650 00:20:19.650 00:20:19.650 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.650 http://cunit.sourceforge.net/ 00:20:19.650 00:20:19.650 00:20:19.650 Suite: bdevio tests on: Nvme1n1 00:20:19.650 Test: blockdev write read block ...passed 00:20:19.650 Test: blockdev write zeroes read block ...passed 00:20:19.650 Test: blockdev write zeroes read no split ...passed 00:20:19.650 Test: blockdev write zeroes 
read split ...passed 00:20:19.650 Test: blockdev write zeroes read split partial ...passed 00:20:19.650 Test: blockdev reset ...[2024-12-10 05:45:37.588036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:19.650 [2024-12-10 05:45:37.588100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fff00 (9): Bad file descriptor 00:20:19.907 [2024-12-10 05:45:37.642903] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:19.907 passed 00:20:19.907 Test: blockdev write read 8 blocks ...passed 00:20:19.907 Test: blockdev write read size > 128k ...passed 00:20:19.907 Test: blockdev write read invalid size ...passed 00:20:19.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.907 Test: blockdev write read max offset ...passed 00:20:19.907 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.907 Test: blockdev writev readv 8 blocks ...passed 00:20:19.907 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.907 Test: blockdev writev readv block ...passed 00:20:19.907 Test: blockdev writev readv size > 128k ...passed 00:20:19.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.907 Test: blockdev comparev and writev ...[2024-12-10 05:45:37.856948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.907 [2024-12-10 05:45:37.856976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.907 [2024-12-10 05:45:37.856989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.907 [2024-12-10 
05:45:37.856997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.907 [2024-12-10 05:45:37.857228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.907 [2024-12-10 05:45:37.857238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.907 [2024-12-10 05:45:37.857250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.907 [2024-12-10 05:45:37.857258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.907 [2024-12-10 05:45:37.857487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.907 [2024-12-10 05:45:37.857500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.907 [2024-12-10 05:45:37.857512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.908 [2024-12-10 05:45:37.857519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.908 [2024-12-10 05:45:37.857755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.908 [2024-12-10 05:45:37.857772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.908 [2024-12-10 05:45:37.857785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.908 [2024-12-10 05:45:37.857792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:20.200 passed 00:20:20.200 Test: blockdev nvme passthru rw ...passed 00:20:20.200 Test: blockdev nvme passthru vendor specific ...[2024-12-10 05:45:37.940511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.200 [2024-12-10 05:45:37.940531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:20.200 [2024-12-10 05:45:37.940636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.200 [2024-12-10 05:45:37.940645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:20.200 [2024-12-10 05:45:37.940753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.200 [2024-12-10 05:45:37.940762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:20.200 [2024-12-10 05:45:37.940859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.200 [2024-12-10 05:45:37.940868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:20.200 passed 00:20:20.200 Test: blockdev nvme admin passthru ...passed 00:20:20.200 Test: blockdev copy ...passed 00:20:20.200 00:20:20.200 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.200 suites 1 1 n/a 0 0 00:20:20.200 tests 23 23 23 0 0 00:20:20.200 asserts 152 152 152 0 n/a 00:20:20.200 00:20:20.200 Elapsed time = 1.162 seconds 
00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.553 rmmod nvme_tcp 00:20:20.553 rmmod nvme_fabrics 00:20:20.553 rmmod nvme_keyring 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 144877 ']' 00:20:20.553 05:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 144877 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 144877 ']' 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 144877 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144877 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144877' 00:20:20.553 killing process with pid 144877 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 144877 00:20:20.553 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 144877 00:20:20.825 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.825 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.825 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.825 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:20.825 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:20.826 05:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.826 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.826 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.826 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.826 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.826 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.826 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.359 00:20:23.359 real 0m11.607s 00:20:23.359 user 0m13.408s 00:20:23.359 sys 0m5.981s 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:23.359 ************************************ 00:20:23.359 END TEST nvmf_bdevio_no_huge 00:20:23.359 ************************************ 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.359 
************************************ 00:20:23.359 START TEST nvmf_tls 00:20:23.359 ************************************ 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.359 * Looking for test storage... 00:20:23.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.359 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:23.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.359 --rc genhtml_branch_coverage=1 00:20:23.359 --rc genhtml_function_coverage=1 00:20:23.359 --rc genhtml_legend=1 00:20:23.359 --rc geninfo_all_blocks=1 00:20:23.359 --rc geninfo_unexecuted_blocks=1 00:20:23.359 00:20:23.359 ' 00:20:23.359 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:23.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.359 --rc genhtml_branch_coverage=1 00:20:23.359 --rc genhtml_function_coverage=1 00:20:23.359 --rc genhtml_legend=1 00:20:23.359 --rc geninfo_all_blocks=1 00:20:23.359 --rc geninfo_unexecuted_blocks=1 00:20:23.359 00:20:23.359 ' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:23.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.360 --rc genhtml_branch_coverage=1 00:20:23.360 --rc genhtml_function_coverage=1 00:20:23.360 --rc genhtml_legend=1 00:20:23.360 --rc geninfo_all_blocks=1 00:20:23.360 --rc geninfo_unexecuted_blocks=1 00:20:23.360 00:20:23.360 ' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:23.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.360 --rc genhtml_branch_coverage=1 00:20:23.360 --rc genhtml_function_coverage=1 00:20:23.360 --rc genhtml_legend=1 00:20:23.360 --rc geninfo_all_blocks=1 00:20:23.360 --rc geninfo_unexecuted_blocks=1 00:20:23.360 00:20:23.360 ' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.360 
05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:23.360 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.926 05:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:29.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:29.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.926 05:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:29.926 Found net devices under 0000:af:00.0: cvl_0_0 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.926 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:29.927 Found net devices under 0000:af:00.1: cvl_0_1 00:20:29.927 05:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.927 
05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:29.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:20:29.927 00:20:29.927 --- 10.0.0.2 ping statistics --- 00:20:29.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.927 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:20:29.927 00:20:29.927 --- 10.0.0.1 ping statistics --- 00:20:29.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.927 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=149173 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 149173 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 149173 ']' 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.927 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.927 [2024-12-10 05:45:47.874045] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:20:29.927 [2024-12-10 05:45:47.874088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.186 [2024-12-10 05:45:47.956297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.186 [2024-12-10 05:45:47.994253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.186 [2024-12-10 05:45:47.994291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:30.186 [2024-12-10 05:45:47.994298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.186 [2024-12-10 05:45:47.994304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.186 [2024-12-10 05:45:47.994309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.186 [2024-12-10 05:45:47.994845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.752 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.752 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.752 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:31.011 true 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.011 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:31.270 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:31.270 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:31.270 
05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:31.529 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.529 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:31.788 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:31.788 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:31.788 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:31.788 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.788 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:32.047 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:32.047 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:32.047 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.047 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:32.306 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:32.306 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:32.306 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
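Every `sock_impl_set_options` / `sock_impl_get_options` call traced above goes through `scripts/rpc.py`, which is a thin JSON-RPC 2.0 client talking to the SPDK application over its Unix-domain socket (`/var/tmp/spdk.sock` by default). A minimal sketch of that exchange follows; `build_rpc_request` and `call_rpc` are illustrative helper names, not SPDK API:

```python
import json
import socket

def build_rpc_request(method, params=None, req_id=1):
    # SPDK's RPC server speaks plain JSON-RPC 2.0; "params" is omitted
    # entirely when a method takes no arguments.
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

def call_rpc(sock_path, method, params=None):
    # One-shot client: connect, send one request, read one response.
    # A robust client should loop on recv() until the buffer parses as
    # JSON, since a large response can arrive in several segments.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_rpc_request(method, params))
        return json.loads(s.recv(65536).decode())

# e.g. call_rpc("/var/tmp/spdk.sock", "sock_impl_get_options",
#               {"impl_name": "ssl"})
# mirrors `rpc.py sock_impl_get_options -i ssl` as used in the trace.
```

The `jq -r .tls_version` / `.enable_ktls` steps in the log are then just field extraction from the JSON result object this call returns.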
00:20:32.306 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.306 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:32.564 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:32.564 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:32.564 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:32.823 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.823 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:33.081 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:33.082 05:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.aiG4XYFVuk 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.8i8Yo0Gtop 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aiG4XYFVuk 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.8i8Yo0Gtop 00:20:33.082 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:33.340 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:33.599 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.aiG4XYFVuk 00:20:33.599 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aiG4XYFVuk 00:20:33.599 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.599 [2024-12-10 05:45:51.506304] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.599 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:33.857 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.115 [2024-12-10 05:45:51.883271] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.115 [2024-12-10 05:45:51.883523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.115 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.373 malloc0 00:20:34.373 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:34.373 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aiG4XYFVuk
00:20:34.631 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:20:34.889 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aiG4XYFVuk
00:20:44.853 Initializing NVMe Controllers
00:20:44.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:44.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:44.853 Initialization complete. Launching workers.
00:20:44.853 ========================================================
00:20:44.853 Latency(us)
00:20:44.853 Device Information : IOPS MiB/s Average min max
00:20:44.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16834.90 65.76 3801.73 811.65 4748.74
00:20:44.853 ========================================================
00:20:44.853 Total : 16834.90 65.76 3801.73 811.65 4748.74
00:20:44.853
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiG4XYFVuk
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aiG4XYFVuk
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:44.853 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=151604
00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 151604 /var/tmp/bdevperf.sock
00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 151604 ']'
00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
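The PSK files fed to `--psk-path` and `keyring_file_add_key` above were produced by the `format_interchange_psk` helper traced earlier (`nvmf/common.sh@743`), which wraps a configured secret in the NVMe TLS PSK interchange format. A minimal sketch of that transformation; it assumes the secret travels as its ASCII hex string suffixed with a little-endian CRC32, which is worth verifying against your tree's `nvmf/common.sh`:

```python
import base64
import zlib

def format_interchange_psk(key_hex, hash_id=1):
    # Interchange layout: "NVMeTLSkey-1:<hash>:<base64 payload>:" where the
    # payload is the ASCII hex secret plus a 4-byte CRC32 of those bytes
    # (little-endian byte order is an assumption made by this sketch).
    raw = key_hex.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")
    payload = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02d}:{}:".format(hash_id, payload)
```

For the 32-character secret `00112233445566778899aabbccddeeff` this produces a 48-character base64 payload, matching the shape of the `NVMeTLSkey-1:01:...:` strings captured in the trace.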
00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.854 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.854 [2024-12-10 05:46:02.791954] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:20:44.854 [2024-12-10 05:46:02.792001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151604 ] 00:20:45.112 [2024-12-10 05:46:02.872389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.112 [2024-12-10 05:46:02.913118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.112 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.112 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:45.112 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aiG4XYFVuk 00:20:45.369 05:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0
00:20:45.627 [2024-12-10 05:46:03.369409] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:45.627 TLSTESTn1
00:20:45.627 05:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:20:45.627 Running I/O for 10 seconds...
00:20:47.931 5426.00 IOPS, 21.20 MiB/s
[2024-12-10T04:46:06.823Z] 5400.00 IOPS, 21.09 MiB/s
[2024-12-10T04:46:07.756Z] 5444.00 IOPS, 21.27 MiB/s
[2024-12-10T04:46:08.688Z] 5498.00 IOPS, 21.48 MiB/s
[2024-12-10T04:46:09.620Z] 5522.40 IOPS, 21.57 MiB/s
[2024-12-10T04:46:10.992Z] 5550.00 IOPS, 21.68 MiB/s
[2024-12-10T04:46:11.925Z] 5538.29 IOPS, 21.63 MiB/s
[2024-12-10T04:46:12.858Z] 5560.25 IOPS, 21.72 MiB/s
[2024-12-10T04:46:13.791Z] 5536.56 IOPS, 21.63 MiB/s
[2024-12-10T04:46:13.791Z] 5486.20 IOPS, 21.43 MiB/s
00:20:55.832 Latency(us)
00:20:55.832 [2024-12-10T04:46:13.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.832 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:55.832 Verification LBA range: start 0x0 length 0x2000
00:20:55.832 TLSTESTn1 : 10.02 5489.86 21.44 0.00 0.00 23280.29 6241.52 27962.03
00:20:55.832 [2024-12-10T04:46:13.791Z] ===================================================================================================================
00:20:55.832 [2024-12-10T04:46:13.791Z] Total : 5489.86 21.44 0.00 0.00 23280.29 6241.52 27962.03
00:20:55.832 {
00:20:55.832 "results": [
00:20:55.832 {
00:20:55.832 "job": "TLSTESTn1",
00:20:55.832 "core_mask": "0x4",
00:20:55.832 "workload": "verify",
00:20:55.832 "status": "finished",
00:20:55.832 "verify_range": {
00:20:55.832 "start": 0,
00:20:55.832 "length": 8192
00:20:55.832 },
00:20:55.832 "queue_depth": 128,
00:20:55.832 "io_size": 4096,
00:20:55.832 "runtime": 10.016289,
00:20:55.832 "iops": 5489.857571002593,
00:20:55.832 "mibps": 21.44475613672888,
00:20:55.833 "io_failed": 0,
00:20:55.833 "io_timeout": 0,
00:20:55.833 "avg_latency_us": 23280.292154201612,
00:20:55.833 "min_latency_us": 6241.523809523809,
00:20:55.833 "max_latency_us": 27962.02666666667
00:20:55.833 }
00:20:55.833 ],
00:20:55.833 "core_count": 1
00:20:55.833 }
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 151604
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 151604 ']'
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 151604
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 151604
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 151604'
killing process with pid 151604
00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 151604
Received shutdown signal, test time was about 10.000000 seconds
00:20:55.833
00:20:55.833 Latency(us)
00:20:55.833 [2024-12-10T04:46:13.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.833 [2024-12-10T04:46:13.792Z]
=================================================================================================================== 00:20:55.833 [2024-12-10T04:46:13.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.833 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 151604 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8i8Yo0Gtop 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8i8Yo0Gtop 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8i8Yo0Gtop 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8i8Yo0Gtop 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=153313 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 153313 /var/tmp/bdevperf.sock 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 153313 ']' 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.091 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.091 [2024-12-10 05:46:13.864114] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:20:56.091 [2024-12-10 05:46:13.864160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153313 ] 00:20:56.091 [2024-12-10 05:46:13.930521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.091 [2024-12-10 05:46:13.967105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.348 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.348 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.348 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8i8Yo0Gtop 00:20:56.348 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:56.606 [2024-12-10 05:46:14.442832] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.606 [2024-12-10 05:46:14.452195] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:56.606 [2024-12-10 05:46:14.453131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x722770 (107): Transport endpoint is not connected 00:20:56.606 [2024-12-10 05:46:14.454124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x722770 (9): Bad file descriptor 00:20:56.606 [2024-12-10 
05:46:14.455125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:20:56.606 [2024-12-10 05:46:14.455137] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:56.606 [2024-12-10 05:46:14.455144] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:20:56.606 [2024-12-10 05:46:14.455156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:20:56.606 request:
00:20:56.606 {
00:20:56.606 "name": "TLSTEST",
00:20:56.606 "trtype": "tcp",
00:20:56.606 "traddr": "10.0.0.2",
00:20:56.606 "adrfam": "ipv4",
00:20:56.606 "trsvcid": "4420",
00:20:56.606 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:56.606 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:56.606 "prchk_reftag": false,
00:20:56.606 "prchk_guard": false,
00:20:56.606 "hdgst": false,
00:20:56.606 "ddgst": false,
00:20:56.606 "psk": "key0",
00:20:56.606 "allow_unrecognized_csi": false,
00:20:56.606 "method": "bdev_nvme_attach_controller",
00:20:56.606 "req_id": 1
00:20:56.606 }
00:20:56.606 Got JSON-RPC error response
00:20:56.606 response:
00:20:56.606 {
00:20:56.606 "code": -5,
00:20:56.606 "message": "Input/output error"
00:20:56.606 }
00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 153313
00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 153313 ']'
00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 153313
00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153313 00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153313' 00:20:56.606 killing process with pid 153313 00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 153313 00:20:56.606 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.606 00:20:56.606 Latency(us) 00:20:56.606 [2024-12-10T04:46:14.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.606 [2024-12-10T04:46:14.565Z] =================================================================================================================== 00:20:56.606 [2024-12-10T04:46:14.565Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.606 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 153313 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aiG4XYFVuk 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aiG4XYFVuk 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aiG4XYFVuk 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aiG4XYFVuk 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=153545 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 153545 
/var/tmp/bdevperf.sock 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 153545 ']' 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.864 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.864 [2024-12-10 05:46:14.734287] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:20:56.864 [2024-12-10 05:46:14.734338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153545 ] 00:20:56.864 [2024-12-10 05:46:14.811604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.122 [2024-12-10 05:46:14.848300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.122 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.122 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.122 05:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aiG4XYFVuk 00:20:57.381 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:57.381 [2024-12-10 05:46:15.303338] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.381 [2024-12-10 05:46:15.314278] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:57.381 [2024-12-10 05:46:15.314301] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:57.381 [2024-12-10 05:46:15.314324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:57.381 [2024-12-10 05:46:15.314583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255d770 (107): Transport endpoint is not connected 00:20:57.381 [2024-12-10 05:46:15.315576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255d770 (9): Bad file descriptor 00:20:57.381 [2024-12-10 05:46:15.316578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:57.381 [2024-12-10 05:46:15.316593] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:57.381 [2024-12-10 05:46:15.316601] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:57.381 [2024-12-10 05:46:15.316609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:57.381 request: 00:20:57.381 { 00:20:57.381 "name": "TLSTEST", 00:20:57.381 "trtype": "tcp", 00:20:57.381 "traddr": "10.0.0.2", 00:20:57.381 "adrfam": "ipv4", 00:20:57.381 "trsvcid": "4420", 00:20:57.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:57.381 "prchk_reftag": false, 00:20:57.381 "prchk_guard": false, 00:20:57.381 "hdgst": false, 00:20:57.381 "ddgst": false, 00:20:57.381 "psk": "key0", 00:20:57.381 "allow_unrecognized_csi": false, 00:20:57.381 "method": "bdev_nvme_attach_controller", 00:20:57.381 "req_id": 1 00:20:57.381 } 00:20:57.381 Got JSON-RPC error response 00:20:57.381 response: 00:20:57.381 { 00:20:57.381 "code": -5, 00:20:57.381 "message": "Input/output error" 00:20:57.381 } 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 153545 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 153545 ']' 00:20:57.640 05:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 153545 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153545 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153545' 00:20:57.640 killing process with pid 153545 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 153545 00:20:57.640 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.640 00:20:57.640 Latency(us) 00:20:57.640 [2024-12-10T04:46:15.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.640 [2024-12-10T04:46:15.599Z] =================================================================================================================== 00:20:57.640 [2024-12-10T04:46:15.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 153545 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:57.640 05:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiG4XYFVuk 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiG4XYFVuk 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiG4XYFVuk 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aiG4XYFVuk 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=153702 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 153702 /var/tmp/bdevperf.sock 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 153702 ']' 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.640 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.898 [2024-12-10 05:46:15.602127] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:20:57.898 [2024-12-10 05:46:15.602174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153702 ] 00:20:57.898 [2024-12-10 05:46:15.680432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.898 [2024-12-10 05:46:15.718712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.898 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.898 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.898 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aiG4XYFVuk 00:20:58.156 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:58.412 [2024-12-10 05:46:16.186398] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.412 [2024-12-10 05:46:16.196543] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:58.412 [2024-12-10 05:46:16.196568] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:58.412 [2024-12-10 05:46:16.196590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:58.412 [2024-12-10 05:46:16.196644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef0770 (107): Transport endpoint is not connected 00:20:58.412 [2024-12-10 05:46:16.197637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef0770 (9): Bad file descriptor 00:20:58.412 [2024-12-10 05:46:16.198638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:58.412 [2024-12-10 05:46:16.198649] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:58.412 [2024-12-10 05:46:16.198657] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:58.412 [2024-12-10 05:46:16.198664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:58.412 request: 00:20:58.412 { 00:20:58.412 "name": "TLSTEST", 00:20:58.412 "trtype": "tcp", 00:20:58.412 "traddr": "10.0.0.2", 00:20:58.412 "adrfam": "ipv4", 00:20:58.412 "trsvcid": "4420", 00:20:58.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.412 "prchk_reftag": false, 00:20:58.412 "prchk_guard": false, 00:20:58.412 "hdgst": false, 00:20:58.412 "ddgst": false, 00:20:58.412 "psk": "key0", 00:20:58.412 "allow_unrecognized_csi": false, 00:20:58.412 "method": "bdev_nvme_attach_controller", 00:20:58.412 "req_id": 1 00:20:58.412 } 00:20:58.412 Got JSON-RPC error response 00:20:58.412 response: 00:20:58.412 { 00:20:58.412 "code": -5, 00:20:58.412 "message": "Input/output error" 00:20:58.412 } 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 153702 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 153702 ']' 00:20:58.412 05:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 153702 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153702 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153702' 00:20:58.412 killing process with pid 153702 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 153702 00:20:58.412 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.412 00:20:58.412 Latency(us) 00:20:58.412 [2024-12-10T04:46:16.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.412 [2024-12-10T04:46:16.371Z] =================================================================================================================== 00:20:58.412 [2024-12-10T04:46:16.371Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.412 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 153702 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:58.669 05:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=153789 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.669 05:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 153789 /var/tmp/bdevperf.sock 00:20:58.669 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 153789 ']' 00:20:58.670 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.670 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.670 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.670 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.670 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.670 [2024-12-10 05:46:16.467323] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:20:58.670 [2024-12-10 05:46:16.467368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153789 ] 00:20:58.670 [2024-12-10 05:46:16.538313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.670 [2024-12-10 05:46:16.578710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.927 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.927 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:58.927 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:58.927 [2024-12-10 05:46:16.849724] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:58.927 [2024-12-10 05:46:16.849751] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:58.927 request: 00:20:58.927 { 00:20:58.927 "name": "key0", 00:20:58.927 "path": "", 00:20:58.927 "method": "keyring_file_add_key", 00:20:58.927 "req_id": 1 00:20:58.927 } 00:20:58.927 Got JSON-RPC error response 00:20:58.927 response: 00:20:58.927 { 00:20:58.927 "code": -1, 00:20:58.927 "message": "Operation not permitted" 00:20:58.927 } 00:20:58.927 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.185 [2024-12-10 05:46:17.042344] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:59.185 [2024-12-10 05:46:17.042386] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:59.185 request: 00:20:59.185 { 00:20:59.185 "name": "TLSTEST", 00:20:59.185 "trtype": "tcp", 00:20:59.185 "traddr": "10.0.0.2", 00:20:59.185 "adrfam": "ipv4", 00:20:59.185 "trsvcid": "4420", 00:20:59.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.185 "prchk_reftag": false, 00:20:59.185 "prchk_guard": false, 00:20:59.185 "hdgst": false, 00:20:59.185 "ddgst": false, 00:20:59.185 "psk": "key0", 00:20:59.185 "allow_unrecognized_csi": false, 00:20:59.185 "method": "bdev_nvme_attach_controller", 00:20:59.185 "req_id": 1 00:20:59.185 } 00:20:59.185 Got JSON-RPC error response 00:20:59.185 response: 00:20:59.185 { 00:20:59.185 "code": -126, 00:20:59.185 "message": "Required key not available" 00:20:59.185 } 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 153789 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 153789 ']' 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 153789 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153789 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153789' 00:20:59.185 killing process with pid 153789 00:20:59.185 
05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 153789 00:20:59.185 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.185 00:20:59.185 Latency(us) 00:20:59.185 [2024-12-10T04:46:17.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.185 [2024-12-10T04:46:17.144Z] =================================================================================================================== 00:20:59.185 [2024-12-10T04:46:17.144Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.185 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 153789 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 149173 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 149173 ']' 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 149173 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 149173 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 149173' 00:20:59.443 killing process with pid 149173 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 149173 00:20:59.443 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 149173 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ovQJZlz7JE 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:59.702 05:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ovQJZlz7JE 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=154028 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 154028 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 154028 ']' 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.702 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.702 [2024-12-10 05:46:17.560203] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:20:59.702 [2024-12-10 05:46:17.560257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.702 [2024-12-10 05:46:17.642855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.960 [2024-12-10 05:46:17.679336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.960 [2024-12-10 05:46:17.679369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.960 [2024-12-10 05:46:17.679376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.960 [2024-12-10 05:46:17.679382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.960 [2024-12-10 05:46:17.679387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.960 [2024-12-10 05:46:17.679879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ovQJZlz7JE 00:20:59.960 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ovQJZlz7JE 00:20:59.961 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:00.219 [2024-12-10 05:46:17.991685] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.219 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.477 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.477 [2024-12-10 05:46:18.368658] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.477 [2024-12-10 05:46:18.368876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:00.477 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.735 malloc0 00:21:00.735 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.994 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:00.994 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ovQJZlz7JE 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ovQJZlz7JE 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=154287 00:21:01.252 05:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 154287 /var/tmp/bdevperf.sock 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 154287 ']' 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.252 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.252 [2024-12-10 05:46:19.112114] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:21:01.252 [2024-12-10 05:46:19.112159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154287 ] 00:21:01.252 [2024-12-10 05:46:19.190719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.510 [2024-12-10 05:46:19.230052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.510 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.510 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:01.510 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:01.767 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:01.767 [2024-12-10 05:46:19.701130] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.024 TLSTESTn1 00:21:02.024 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.024 Running I/O for 10 seconds... 
00:21:04.328 5381.00 IOPS, 21.02 MiB/s
[2024-12-10T04:46:23.219Z] 5506.50 IOPS, 21.51 MiB/s
[2024-12-10T04:46:24.151Z] 5516.00 IOPS, 21.55 MiB/s
[2024-12-10T04:46:25.084Z] 5526.50 IOPS, 21.59 MiB/s
[2024-12-10T04:46:26.016Z] 5543.20 IOPS, 21.65 MiB/s
[2024-12-10T04:46:26.948Z] 5512.67 IOPS, 21.53 MiB/s
[2024-12-10T04:46:28.320Z] 5471.86 IOPS, 21.37 MiB/s
[2024-12-10T04:46:29.275Z] 5381.00 IOPS, 21.02 MiB/s
[2024-12-10T04:46:30.264Z] 5332.78 IOPS, 20.83 MiB/s
[2024-12-10T04:46:30.264Z] 5253.00 IOPS, 20.52 MiB/s
00:21:12.305 Latency(us)
00:21:12.305 [2024-12-10T04:46:30.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:12.305 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:12.305 Verification LBA range: start 0x0 length 0x2000
00:21:12.305 TLSTESTn1 : 10.03 5251.56 20.51 0.00 0.00 24326.61 5804.62 30708.30
00:21:12.305 [2024-12-10T04:46:30.264Z] ===================================================================================================================
00:21:12.305 [2024-12-10T04:46:30.264Z] Total : 5251.56 20.51 0.00 0.00 24326.61 5804.62 30708.30
00:21:12.305 {
00:21:12.305 "results": [
00:21:12.305 {
00:21:12.305 "job": "TLSTESTn1",
00:21:12.305 "core_mask": "0x4",
00:21:12.305 "workload": "verify",
00:21:12.305 "status": "finished",
00:21:12.305 "verify_range": {
00:21:12.305 "start": 0,
00:21:12.305 "length": 8192
00:21:12.305 },
00:21:12.305 "queue_depth": 128,
00:21:12.305 "io_size": 4096,
00:21:12.305 "runtime": 10.026927,
00:21:12.305 "iops": 5251.559126739428,
00:21:12.305 "mibps": 20.513902838825892,
00:21:12.305 "io_failed": 0,
00:21:12.305 "io_timeout": 0,
00:21:12.305 "avg_latency_us": 24326.607818252356,
00:21:12.305 "min_latency_us": 5804.617142857142,
00:21:12.305 "max_latency_us": 30708.297142857144
00:21:12.305 }
00:21:12.305 ],
00:21:12.305 "core_count": 1
00:21:12.305 }
00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini;
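In the bdevperf results above, the MiB/s column is derived directly from the IOPS column and the 4096-byte I/O size (`-o 4096`): MiB/s = IOPS × io_size / 2^20. A quick sanity check of the `iops` and `mibps` fields from the JSON results:

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    # MiB/s = (I/Os per second * bytes per I/O) / bytes per MiB
    return iops * io_size_bytes / (1 << 20)

# Values taken from the bdevperf JSON results in the log
mibps = iops_to_mibps(5251.559126739428, 4096)
print(mibps)
```

With a 4 KiB I/O size this reduces to IOPS / 256, which is why 5251.56 IOPS reads back as 20.51 MiB/s in the summary table.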
exit 1' SIGINT SIGTERM EXIT 00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 154287 00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 154287 ']' 00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 154287 00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.306 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154287 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154287' 00:21:12.306 killing process with pid 154287 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 154287 00:21:12.306 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.306 00:21:12.306 Latency(us) 00:21:12.306 [2024-12-10T04:46:30.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.306 [2024-12-10T04:46:30.265Z] =================================================================================================================== 00:21:12.306 [2024-12-10T04:46:30.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 154287 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ovQJZlz7JE 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ovQJZlz7JE 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ovQJZlz7JE 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ovQJZlz7JE 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ovQJZlz7JE 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=156102 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 156102 /var/tmp/bdevperf.sock 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 156102 ']' 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.306 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.306 [2024-12-10 05:46:30.229751] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:21:12.306 [2024-12-10 05:46:30.229802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156102 ] 00:21:12.564 [2024-12-10 05:46:30.307915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.564 [2024-12-10 05:46:30.343938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.564 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.564 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:12.564 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:12.821 [2024-12-10 05:46:30.618928] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ovQJZlz7JE': 0100666 00:21:12.821 [2024-12-10 05:46:30.618961] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:12.821 request: 00:21:12.821 { 00:21:12.821 "name": "key0", 00:21:12.821 "path": "/tmp/tmp.ovQJZlz7JE", 00:21:12.821 "method": "keyring_file_add_key", 00:21:12.821 "req_id": 1 00:21:12.821 } 00:21:12.821 Got JSON-RPC error response 00:21:12.821 response: 00:21:12.821 { 00:21:12.821 "code": -1, 00:21:12.822 "message": "Operation not permitted" 00:21:12.822 } 00:21:12.822 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:13.079 [2024-12-10 05:46:30.819530] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.079 [2024-12-10 05:46:30.819566] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:13.079 request: 00:21:13.079 { 00:21:13.079 "name": "TLSTEST", 00:21:13.079 "trtype": "tcp", 00:21:13.079 "traddr": "10.0.0.2", 00:21:13.079 "adrfam": "ipv4", 00:21:13.079 "trsvcid": "4420", 00:21:13.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.079 "prchk_reftag": false, 00:21:13.079 "prchk_guard": false, 00:21:13.079 "hdgst": false, 00:21:13.079 "ddgst": false, 00:21:13.079 "psk": "key0", 00:21:13.079 "allow_unrecognized_csi": false, 00:21:13.079 "method": "bdev_nvme_attach_controller", 00:21:13.079 "req_id": 1 00:21:13.079 } 00:21:13.079 Got JSON-RPC error response 00:21:13.079 response: 00:21:13.079 { 00:21:13.079 "code": -126, 00:21:13.079 "message": "Required key not available" 00:21:13.079 } 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 156102 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 156102 ']' 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 156102 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156102 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:13.079 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 156102' 00:21:13.079 killing process with pid 156102 00:21:13.080 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 156102 00:21:13.080 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.080 00:21:13.080 Latency(us) 00:21:13.080 [2024-12-10T04:46:31.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.080 [2024-12-10T04:46:31.039Z] =================================================================================================================== 00:21:13.080 [2024-12-10T04:46:31.039Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.080 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 156102 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 154028 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 154028 ']' 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 154028 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154028 00:21:13.338 05:46:31 
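The failure in this negative test is SPDK's keyring refusing a key file with mode 0100666: after the `chmod 0666`, the file is group/other accessible and `keyring_file_check_path` rejects it, which is why the earlier `chmod 0600` was needed for the happy path. A hypothetical Python sketch of that kind of permission gate (an approximation of the owner-only policy, not SPDK's actual keyring.c code):

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Reject key files readable or writable by group/others,
    mirroring the owner-only (0600) policy the keyring errors above enforce."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
loose = key_file_permissions_ok(path)   # group/other bits set -> rejected
os.chmod(path, 0o600)
strict = key_file_permissions_ok(path)  # owner-only -> accepted
os.unlink(path)
print(loose, strict)
```

This matches the observed behavior: `keyring_file_add_key` fails with "Operation not permitted" on the 0666 file, and the subsequent `bdev_nvme_attach_controller` then fails because `key0` was never added.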
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154028' 00:21:13.338 killing process with pid 154028 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 154028 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 154028 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=156341 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 156341 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 156341 ']' 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:13.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.338 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.596 [2024-12-10 05:46:31.326748] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:13.596 [2024-12-10 05:46:31.326794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.596 [2024-12-10 05:46:31.407448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.596 [2024-12-10 05:46:31.441145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.596 [2024-12-10 05:46:31.441181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.596 [2024-12-10 05:46:31.441188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.596 [2024-12-10 05:46:31.441194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.596 [2024-12-10 05:46:31.441204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.596 [2024-12-10 05:46:31.441758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.596 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.596 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.596 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.596 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.596 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ovQJZlz7JE 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ovQJZlz7JE 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ovQJZlz7JE 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ovQJZlz7JE 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:13.854 [2024-12-10 05:46:31.757550] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.854 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:14.113 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:14.371 [2024-12-10 05:46:32.150540] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.371 [2024-12-10 05:46:32.150757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.371 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.629 malloc0 00:21:14.629 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:14.887 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:14.887 [2024-12-10 05:46:32.760230] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ovQJZlz7JE': 0100666 00:21:14.887 [2024-12-10 05:46:32.760274] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:14.887 request: 00:21:14.887 { 00:21:14.887 "name": "key0", 00:21:14.887 "path": "/tmp/tmp.ovQJZlz7JE", 00:21:14.887 "method": "keyring_file_add_key", 00:21:14.887 "req_id": 1 
00:21:14.887 } 00:21:14.887 Got JSON-RPC error response 00:21:14.887 response: 00:21:14.887 { 00:21:14.887 "code": -1, 00:21:14.887 "message": "Operation not permitted" 00:21:14.887 } 00:21:14.887 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.146 [2024-12-10 05:46:32.944733] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:15.146 [2024-12-10 05:46:32.944774] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:15.146 request: 00:21:15.146 { 00:21:15.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.146 "host": "nqn.2016-06.io.spdk:host1", 00:21:15.146 "psk": "key0", 00:21:15.146 "method": "nvmf_subsystem_add_host", 00:21:15.146 "req_id": 1 00:21:15.146 } 00:21:15.146 Got JSON-RPC error response 00:21:15.146 response: 00:21:15.146 { 00:21:15.146 "code": -32603, 00:21:15.146 "message": "Internal error" 00:21:15.146 } 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 156341 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 156341 ']' 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 156341 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:15.146 05:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.146 05:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156341 00:21:15.146 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:15.146 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:15.146 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156341' 00:21:15.146 killing process with pid 156341 00:21:15.146 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 156341 00:21:15.146 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 156341 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ovQJZlz7JE 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=156609 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 156609 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 156609 ']' 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.405 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.405 [2024-12-10 05:46:33.257120] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:15.405 [2024-12-10 05:46:33.257166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.405 [2024-12-10 05:46:33.339835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.664 [2024-12-10 05:46:33.375151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.664 [2024-12-10 05:46:33.375183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.664 [2024-12-10 05:46:33.375190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.664 [2024-12-10 05:46:33.375196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.664 [2024-12-10 05:46:33.375203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:15.664 [2024-12-10 05:46:33.375747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ovQJZlz7JE 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ovQJZlz7JE 00:21:16.231 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.489 [2024-12-10 05:46:34.288756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.489 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.747 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.747 [2024-12-10 05:46:34.673732] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.747 [2024-12-10 05:46:34.673942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:17.005 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:17.005 malloc0 00:21:17.005 05:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:17.264 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:17.521 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=157077 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 157077 /var/tmp/bdevperf.sock 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 157077 ']' 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.778 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:21:17.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.779 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.779 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.779 [2024-12-10 05:46:35.522283] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:17.779 [2024-12-10 05:46:35.522330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157077 ] 00:21:17.779 [2024-12-10 05:46:35.598687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.779 [2024-12-10 05:46:35.637525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.036 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.036 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.036 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:18.036 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.293 [2024-12-10 05:46:36.089151] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.293 TLSTESTn1 00:21:18.293 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:18.551 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:18.551 "subsystems": [ 00:21:18.551 { 00:21:18.551 "subsystem": "keyring", 00:21:18.551 "config": [ 00:21:18.551 { 00:21:18.551 "method": "keyring_file_add_key", 00:21:18.551 "params": { 00:21:18.551 "name": "key0", 00:21:18.551 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:18.551 } 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "iobuf", 00:21:18.551 "config": [ 00:21:18.551 { 00:21:18.551 "method": "iobuf_set_options", 00:21:18.551 "params": { 00:21:18.551 "small_pool_count": 8192, 00:21:18.551 "large_pool_count": 1024, 00:21:18.551 "small_bufsize": 8192, 00:21:18.551 "large_bufsize": 135168, 00:21:18.551 "enable_numa": false 00:21:18.551 } 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "sock", 00:21:18.551 "config": [ 00:21:18.551 { 00:21:18.551 "method": "sock_set_default_impl", 00:21:18.551 "params": { 00:21:18.551 "impl_name": "posix" 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "sock_impl_set_options", 00:21:18.551 "params": { 00:21:18.551 "impl_name": "ssl", 00:21:18.551 "recv_buf_size": 4096, 00:21:18.551 "send_buf_size": 4096, 00:21:18.551 "enable_recv_pipe": true, 00:21:18.551 "enable_quickack": false, 00:21:18.551 "enable_placement_id": 0, 00:21:18.551 "enable_zerocopy_send_server": true, 00:21:18.551 "enable_zerocopy_send_client": false, 00:21:18.551 "zerocopy_threshold": 0, 00:21:18.551 "tls_version": 0, 00:21:18.551 "enable_ktls": false 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "sock_impl_set_options", 00:21:18.551 "params": { 00:21:18.551 "impl_name": "posix", 00:21:18.551 "recv_buf_size": 2097152, 00:21:18.551 "send_buf_size": 2097152, 00:21:18.551 "enable_recv_pipe": true, 00:21:18.551 "enable_quickack": false, 00:21:18.551 "enable_placement_id": 0, 
00:21:18.551 "enable_zerocopy_send_server": true, 00:21:18.551 "enable_zerocopy_send_client": false, 00:21:18.551 "zerocopy_threshold": 0, 00:21:18.551 "tls_version": 0, 00:21:18.551 "enable_ktls": false 00:21:18.551 } 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "vmd", 00:21:18.551 "config": [] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "accel", 00:21:18.551 "config": [ 00:21:18.551 { 00:21:18.551 "method": "accel_set_options", 00:21:18.551 "params": { 00:21:18.551 "small_cache_size": 128, 00:21:18.551 "large_cache_size": 16, 00:21:18.551 "task_count": 2048, 00:21:18.551 "sequence_count": 2048, 00:21:18.551 "buf_count": 2048 00:21:18.551 } 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "bdev", 00:21:18.551 "config": [ 00:21:18.551 { 00:21:18.551 "method": "bdev_set_options", 00:21:18.551 "params": { 00:21:18.551 "bdev_io_pool_size": 65535, 00:21:18.551 "bdev_io_cache_size": 256, 00:21:18.551 "bdev_auto_examine": true, 00:21:18.551 "iobuf_small_cache_size": 128, 00:21:18.551 "iobuf_large_cache_size": 16 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "bdev_raid_set_options", 00:21:18.551 "params": { 00:21:18.551 "process_window_size_kb": 1024, 00:21:18.551 "process_max_bandwidth_mb_sec": 0 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "bdev_iscsi_set_options", 00:21:18.551 "params": { 00:21:18.551 "timeout_sec": 30 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "bdev_nvme_set_options", 00:21:18.551 "params": { 00:21:18.551 "action_on_timeout": "none", 00:21:18.551 "timeout_us": 0, 00:21:18.551 "timeout_admin_us": 0, 00:21:18.551 "keep_alive_timeout_ms": 10000, 00:21:18.551 "arbitration_burst": 0, 00:21:18.551 "low_priority_weight": 0, 00:21:18.551 "medium_priority_weight": 0, 00:21:18.551 "high_priority_weight": 0, 00:21:18.551 "nvme_adminq_poll_period_us": 10000, 00:21:18.551 "nvme_ioq_poll_period_us": 0, 
00:21:18.551 "io_queue_requests": 0, 00:21:18.551 "delay_cmd_submit": true, 00:21:18.551 "transport_retry_count": 4, 00:21:18.551 "bdev_retry_count": 3, 00:21:18.551 "transport_ack_timeout": 0, 00:21:18.551 "ctrlr_loss_timeout_sec": 0, 00:21:18.551 "reconnect_delay_sec": 0, 00:21:18.551 "fast_io_fail_timeout_sec": 0, 00:21:18.551 "disable_auto_failback": false, 00:21:18.551 "generate_uuids": false, 00:21:18.551 "transport_tos": 0, 00:21:18.551 "nvme_error_stat": false, 00:21:18.551 "rdma_srq_size": 0, 00:21:18.551 "io_path_stat": false, 00:21:18.551 "allow_accel_sequence": false, 00:21:18.551 "rdma_max_cq_size": 0, 00:21:18.551 "rdma_cm_event_timeout_ms": 0, 00:21:18.551 "dhchap_digests": [ 00:21:18.551 "sha256", 00:21:18.551 "sha384", 00:21:18.551 "sha512" 00:21:18.551 ], 00:21:18.551 "dhchap_dhgroups": [ 00:21:18.551 "null", 00:21:18.551 "ffdhe2048", 00:21:18.551 "ffdhe3072", 00:21:18.551 "ffdhe4096", 00:21:18.551 "ffdhe6144", 00:21:18.551 "ffdhe8192" 00:21:18.551 ], 00:21:18.551 "rdma_umr_per_io": false 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "bdev_nvme_set_hotplug", 00:21:18.551 "params": { 00:21:18.551 "period_us": 100000, 00:21:18.551 "enable": false 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "bdev_malloc_create", 00:21:18.551 "params": { 00:21:18.551 "name": "malloc0", 00:21:18.551 "num_blocks": 8192, 00:21:18.551 "block_size": 4096, 00:21:18.551 "physical_block_size": 4096, 00:21:18.551 "uuid": "9c89b5f0-144f-4093-bf44-d8e7ec6b5d66", 00:21:18.551 "optimal_io_boundary": 0, 00:21:18.551 "md_size": 0, 00:21:18.551 "dif_type": 0, 00:21:18.551 "dif_is_head_of_md": false, 00:21:18.551 "dif_pi_format": 0 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "bdev_wait_for_examine" 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "nbd", 00:21:18.551 "config": [] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "scheduler", 00:21:18.551 "config": [ 
00:21:18.551 { 00:21:18.551 "method": "framework_set_scheduler", 00:21:18.551 "params": { 00:21:18.551 "name": "static" 00:21:18.551 } 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "subsystem": "nvmf", 00:21:18.551 "config": [ 00:21:18.551 { 00:21:18.551 "method": "nvmf_set_config", 00:21:18.551 "params": { 00:21:18.551 "discovery_filter": "match_any", 00:21:18.551 "admin_cmd_passthru": { 00:21:18.551 "identify_ctrlr": false 00:21:18.551 }, 00:21:18.551 "dhchap_digests": [ 00:21:18.551 "sha256", 00:21:18.551 "sha384", 00:21:18.551 "sha512" 00:21:18.551 ], 00:21:18.551 "dhchap_dhgroups": [ 00:21:18.551 "null", 00:21:18.551 "ffdhe2048", 00:21:18.551 "ffdhe3072", 00:21:18.551 "ffdhe4096", 00:21:18.551 "ffdhe6144", 00:21:18.551 "ffdhe8192" 00:21:18.551 ] 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_set_max_subsystems", 00:21:18.551 "params": { 00:21:18.551 "max_subsystems": 1024 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_set_crdt", 00:21:18.551 "params": { 00:21:18.551 "crdt1": 0, 00:21:18.551 "crdt2": 0, 00:21:18.551 "crdt3": 0 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_create_transport", 00:21:18.551 "params": { 00:21:18.551 "trtype": "TCP", 00:21:18.551 "max_queue_depth": 128, 00:21:18.551 "max_io_qpairs_per_ctrlr": 127, 00:21:18.551 "in_capsule_data_size": 4096, 00:21:18.551 "max_io_size": 131072, 00:21:18.551 "io_unit_size": 131072, 00:21:18.551 "max_aq_depth": 128, 00:21:18.551 "num_shared_buffers": 511, 00:21:18.551 "buf_cache_size": 4294967295, 00:21:18.551 "dif_insert_or_strip": false, 00:21:18.551 "zcopy": false, 00:21:18.551 "c2h_success": false, 00:21:18.551 "sock_priority": 0, 00:21:18.551 "abort_timeout_sec": 1, 00:21:18.551 "ack_timeout": 0, 00:21:18.551 "data_wr_pool_size": 0 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_create_subsystem", 00:21:18.551 "params": { 00:21:18.551 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:18.551 "allow_any_host": false, 00:21:18.551 "serial_number": "SPDK00000000000001", 00:21:18.551 "model_number": "SPDK bdev Controller", 00:21:18.551 "max_namespaces": 10, 00:21:18.551 "min_cntlid": 1, 00:21:18.551 "max_cntlid": 65519, 00:21:18.551 "ana_reporting": false 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_subsystem_add_host", 00:21:18.551 "params": { 00:21:18.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.551 "host": "nqn.2016-06.io.spdk:host1", 00:21:18.551 "psk": "key0" 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_subsystem_add_ns", 00:21:18.551 "params": { 00:21:18.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.551 "namespace": { 00:21:18.551 "nsid": 1, 00:21:18.551 "bdev_name": "malloc0", 00:21:18.551 "nguid": "9C89B5F0144F4093BF44D8E7EC6B5D66", 00:21:18.551 "uuid": "9c89b5f0-144f-4093-bf44-d8e7ec6b5d66", 00:21:18.551 "no_auto_visible": false 00:21:18.551 } 00:21:18.551 } 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "method": "nvmf_subsystem_add_listener", 00:21:18.551 "params": { 00:21:18.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.551 "listen_address": { 00:21:18.551 "trtype": "TCP", 00:21:18.551 "adrfam": "IPv4", 00:21:18.551 "traddr": "10.0.0.2", 00:21:18.551 "trsvcid": "4420" 00:21:18.551 }, 00:21:18.551 "secure_channel": true 00:21:18.551 } 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }' 00:21:18.551 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:18.809 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:18.809 "subsystems": [ 00:21:18.809 { 00:21:18.809 "subsystem": "keyring", 00:21:18.809 "config": [ 00:21:18.809 { 00:21:18.809 "method": "keyring_file_add_key", 00:21:18.809 "params": { 00:21:18.809 "name": "key0", 00:21:18.809 "path": 
"/tmp/tmp.ovQJZlz7JE" 00:21:18.809 } 00:21:18.809 } 00:21:18.809 ] 00:21:18.809 }, 00:21:18.809 { 00:21:18.809 "subsystem": "iobuf", 00:21:18.809 "config": [ 00:21:18.809 { 00:21:18.809 "method": "iobuf_set_options", 00:21:18.809 "params": { 00:21:18.809 "small_pool_count": 8192, 00:21:18.809 "large_pool_count": 1024, 00:21:18.809 "small_bufsize": 8192, 00:21:18.809 "large_bufsize": 135168, 00:21:18.809 "enable_numa": false 00:21:18.809 } 00:21:18.809 } 00:21:18.809 ] 00:21:18.809 }, 00:21:18.809 { 00:21:18.809 "subsystem": "sock", 00:21:18.809 "config": [ 00:21:18.809 { 00:21:18.809 "method": "sock_set_default_impl", 00:21:18.809 "params": { 00:21:18.809 "impl_name": "posix" 00:21:18.809 } 00:21:18.809 }, 00:21:18.809 { 00:21:18.809 "method": "sock_impl_set_options", 00:21:18.809 "params": { 00:21:18.809 "impl_name": "ssl", 00:21:18.809 "recv_buf_size": 4096, 00:21:18.809 "send_buf_size": 4096, 00:21:18.809 "enable_recv_pipe": true, 00:21:18.809 "enable_quickack": false, 00:21:18.809 "enable_placement_id": 0, 00:21:18.809 "enable_zerocopy_send_server": true, 00:21:18.809 "enable_zerocopy_send_client": false, 00:21:18.809 "zerocopy_threshold": 0, 00:21:18.809 "tls_version": 0, 00:21:18.809 "enable_ktls": false 00:21:18.809 } 00:21:18.809 }, 00:21:18.809 { 00:21:18.809 "method": "sock_impl_set_options", 00:21:18.809 "params": { 00:21:18.809 "impl_name": "posix", 00:21:18.809 "recv_buf_size": 2097152, 00:21:18.809 "send_buf_size": 2097152, 00:21:18.809 "enable_recv_pipe": true, 00:21:18.809 "enable_quickack": false, 00:21:18.809 "enable_placement_id": 0, 00:21:18.809 "enable_zerocopy_send_server": true, 00:21:18.809 "enable_zerocopy_send_client": false, 00:21:18.809 "zerocopy_threshold": 0, 00:21:18.809 "tls_version": 0, 00:21:18.809 "enable_ktls": false 00:21:18.809 } 00:21:18.809 } 00:21:18.809 ] 00:21:18.809 }, 00:21:18.809 { 00:21:18.810 "subsystem": "vmd", 00:21:18.810 "config": [] 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "subsystem": "accel", 00:21:18.810 
"config": [ 00:21:18.810 { 00:21:18.810 "method": "accel_set_options", 00:21:18.810 "params": { 00:21:18.810 "small_cache_size": 128, 00:21:18.810 "large_cache_size": 16, 00:21:18.810 "task_count": 2048, 00:21:18.810 "sequence_count": 2048, 00:21:18.810 "buf_count": 2048 00:21:18.810 } 00:21:18.810 } 00:21:18.810 ] 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "subsystem": "bdev", 00:21:18.810 "config": [ 00:21:18.810 { 00:21:18.810 "method": "bdev_set_options", 00:21:18.810 "params": { 00:21:18.810 "bdev_io_pool_size": 65535, 00:21:18.810 "bdev_io_cache_size": 256, 00:21:18.810 "bdev_auto_examine": true, 00:21:18.810 "iobuf_small_cache_size": 128, 00:21:18.810 "iobuf_large_cache_size": 16 00:21:18.810 } 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "method": "bdev_raid_set_options", 00:21:18.810 "params": { 00:21:18.810 "process_window_size_kb": 1024, 00:21:18.810 "process_max_bandwidth_mb_sec": 0 00:21:18.810 } 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "method": "bdev_iscsi_set_options", 00:21:18.810 "params": { 00:21:18.810 "timeout_sec": 30 00:21:18.810 } 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "method": "bdev_nvme_set_options", 00:21:18.810 "params": { 00:21:18.810 "action_on_timeout": "none", 00:21:18.810 "timeout_us": 0, 00:21:18.810 "timeout_admin_us": 0, 00:21:18.810 "keep_alive_timeout_ms": 10000, 00:21:18.810 "arbitration_burst": 0, 00:21:18.810 "low_priority_weight": 0, 00:21:18.810 "medium_priority_weight": 0, 00:21:18.810 "high_priority_weight": 0, 00:21:18.810 "nvme_adminq_poll_period_us": 10000, 00:21:18.810 "nvme_ioq_poll_period_us": 0, 00:21:18.810 "io_queue_requests": 512, 00:21:18.810 "delay_cmd_submit": true, 00:21:18.810 "transport_retry_count": 4, 00:21:18.810 "bdev_retry_count": 3, 00:21:18.810 "transport_ack_timeout": 0, 00:21:18.810 "ctrlr_loss_timeout_sec": 0, 00:21:18.810 "reconnect_delay_sec": 0, 00:21:18.810 "fast_io_fail_timeout_sec": 0, 00:21:18.810 "disable_auto_failback": false, 00:21:18.810 "generate_uuids": false, 00:21:18.810 
"transport_tos": 0, 00:21:18.810 "nvme_error_stat": false, 00:21:18.810 "rdma_srq_size": 0, 00:21:18.810 "io_path_stat": false, 00:21:18.810 "allow_accel_sequence": false, 00:21:18.810 "rdma_max_cq_size": 0, 00:21:18.810 "rdma_cm_event_timeout_ms": 0, 00:21:18.810 "dhchap_digests": [ 00:21:18.810 "sha256", 00:21:18.810 "sha384", 00:21:18.810 "sha512" 00:21:18.810 ], 00:21:18.810 "dhchap_dhgroups": [ 00:21:18.810 "null", 00:21:18.810 "ffdhe2048", 00:21:18.810 "ffdhe3072", 00:21:18.810 "ffdhe4096", 00:21:18.810 "ffdhe6144", 00:21:18.810 "ffdhe8192" 00:21:18.810 ], 00:21:18.810 "rdma_umr_per_io": false 00:21:18.810 } 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "method": "bdev_nvme_attach_controller", 00:21:18.810 "params": { 00:21:18.810 "name": "TLSTEST", 00:21:18.810 "trtype": "TCP", 00:21:18.810 "adrfam": "IPv4", 00:21:18.810 "traddr": "10.0.0.2", 00:21:18.810 "trsvcid": "4420", 00:21:18.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.810 "prchk_reftag": false, 00:21:18.810 "prchk_guard": false, 00:21:18.810 "ctrlr_loss_timeout_sec": 0, 00:21:18.810 "reconnect_delay_sec": 0, 00:21:18.810 "fast_io_fail_timeout_sec": 0, 00:21:18.810 "psk": "key0", 00:21:18.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.810 "hdgst": false, 00:21:18.810 "ddgst": false, 00:21:18.810 "multipath": "multipath" 00:21:18.810 } 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "method": "bdev_nvme_set_hotplug", 00:21:18.810 "params": { 00:21:18.810 "period_us": 100000, 00:21:18.810 "enable": false 00:21:18.810 } 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "method": "bdev_wait_for_examine" 00:21:18.810 } 00:21:18.810 ] 00:21:18.810 }, 00:21:18.810 { 00:21:18.810 "subsystem": "nbd", 00:21:18.810 "config": [] 00:21:18.810 } 00:21:18.810 ] 00:21:18.810 }' 00:21:18.810 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 157077 00:21:18.810 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 157077 ']' 00:21:18.810 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 157077 00:21:18.810 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.810 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.810 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157077 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157077' 00:21:19.069 killing process with pid 157077 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 157077 00:21:19.069 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.069 00:21:19.069 Latency(us) 00:21:19.069 [2024-12-10T04:46:37.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.069 [2024-12-10T04:46:37.028Z] =================================================================================================================== 00:21:19.069 [2024-12-10T04:46:37.028Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 157077 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 156609 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 156609 ']' 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 156609 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.069 
05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156609 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156609' 00:21:19.069 killing process with pid 156609 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 156609 00:21:19.069 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 156609 00:21:19.327 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:19.327 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.327 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.327 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:19.327 "subsystems": [ 00:21:19.327 { 00:21:19.327 "subsystem": "keyring", 00:21:19.327 "config": [ 00:21:19.327 { 00:21:19.327 "method": "keyring_file_add_key", 00:21:19.327 "params": { 00:21:19.327 "name": "key0", 00:21:19.327 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:19.327 } 00:21:19.327 } 00:21:19.327 ] 00:21:19.327 }, 00:21:19.327 { 00:21:19.327 "subsystem": "iobuf", 00:21:19.327 "config": [ 00:21:19.327 { 00:21:19.327 "method": "iobuf_set_options", 00:21:19.327 "params": { 00:21:19.327 "small_pool_count": 8192, 00:21:19.327 "large_pool_count": 1024, 00:21:19.327 "small_bufsize": 8192, 00:21:19.327 "large_bufsize": 135168, 00:21:19.327 "enable_numa": false 00:21:19.327 } 
00:21:19.327 } 00:21:19.327 ] 00:21:19.327 }, 00:21:19.327 { 00:21:19.327 "subsystem": "sock", 00:21:19.327 "config": [ 00:21:19.327 { 00:21:19.327 "method": "sock_set_default_impl", 00:21:19.327 "params": { 00:21:19.327 "impl_name": "posix" 00:21:19.327 } 00:21:19.327 }, 00:21:19.327 { 00:21:19.327 "method": "sock_impl_set_options", 00:21:19.327 "params": { 00:21:19.327 "impl_name": "ssl", 00:21:19.327 "recv_buf_size": 4096, 00:21:19.327 "send_buf_size": 4096, 00:21:19.327 "enable_recv_pipe": true, 00:21:19.327 "enable_quickack": false, 00:21:19.327 "enable_placement_id": 0, 00:21:19.327 "enable_zerocopy_send_server": true, 00:21:19.327 "enable_zerocopy_send_client": false, 00:21:19.327 "zerocopy_threshold": 0, 00:21:19.327 "tls_version": 0, 00:21:19.327 "enable_ktls": false 00:21:19.327 } 00:21:19.327 }, 00:21:19.327 { 00:21:19.327 "method": "sock_impl_set_options", 00:21:19.327 "params": { 00:21:19.327 "impl_name": "posix", 00:21:19.327 "recv_buf_size": 2097152, 00:21:19.327 "send_buf_size": 2097152, 00:21:19.327 "enable_recv_pipe": true, 00:21:19.328 "enable_quickack": false, 00:21:19.328 "enable_placement_id": 0, 00:21:19.328 "enable_zerocopy_send_server": true, 00:21:19.328 "enable_zerocopy_send_client": false, 00:21:19.328 "zerocopy_threshold": 0, 00:21:19.328 "tls_version": 0, 00:21:19.328 "enable_ktls": false 00:21:19.328 } 00:21:19.328 } 00:21:19.328 ] 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "subsystem": "vmd", 00:21:19.328 "config": [] 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "subsystem": "accel", 00:21:19.328 "config": [ 00:21:19.328 { 00:21:19.328 "method": "accel_set_options", 00:21:19.328 "params": { 00:21:19.328 "small_cache_size": 128, 00:21:19.328 "large_cache_size": 16, 00:21:19.328 "task_count": 2048, 00:21:19.328 "sequence_count": 2048, 00:21:19.328 "buf_count": 2048 00:21:19.328 } 00:21:19.328 } 00:21:19.328 ] 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "subsystem": "bdev", 00:21:19.328 "config": [ 00:21:19.328 { 00:21:19.328 "method": 
"bdev_set_options", 00:21:19.328 "params": { 00:21:19.328 "bdev_io_pool_size": 65535, 00:21:19.328 "bdev_io_cache_size": 256, 00:21:19.328 "bdev_auto_examine": true, 00:21:19.328 "iobuf_small_cache_size": 128, 00:21:19.328 "iobuf_large_cache_size": 16 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "bdev_raid_set_options", 00:21:19.328 "params": { 00:21:19.328 "process_window_size_kb": 1024, 00:21:19.328 "process_max_bandwidth_mb_sec": 0 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "bdev_iscsi_set_options", 00:21:19.328 "params": { 00:21:19.328 "timeout_sec": 30 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "bdev_nvme_set_options", 00:21:19.328 "params": { 00:21:19.328 "action_on_timeout": "none", 00:21:19.328 "timeout_us": 0, 00:21:19.328 "timeout_admin_us": 0, 00:21:19.328 "keep_alive_timeout_ms": 10000, 00:21:19.328 "arbitration_burst": 0, 00:21:19.328 "low_priority_weight": 0, 00:21:19.328 "medium_priority_weight": 0, 00:21:19.328 "high_priority_weight": 0, 00:21:19.328 "nvme_adminq_poll_period_us": 10000, 00:21:19.328 "nvme_ioq_poll_period_us": 0, 00:21:19.328 "io_queue_requests": 0, 00:21:19.328 "delay_cmd_submit": true, 00:21:19.328 "transport_retry_count": 4, 00:21:19.328 "bdev_retry_count": 3, 00:21:19.328 "transport_ack_timeout": 0, 00:21:19.328 "ctrlr_loss_timeout_sec": 0, 00:21:19.328 "reconnect_delay_sec": 0, 00:21:19.328 "fast_io_fail_timeout_sec": 0, 00:21:19.328 "disable_auto_failback": false, 00:21:19.328 "generate_uuids": false, 00:21:19.328 "transport_tos": 0, 00:21:19.328 "nvme_error_stat": false, 00:21:19.328 "rdma_srq_size": 0, 00:21:19.328 "io_path_stat": false, 00:21:19.328 "allow_accel_sequence": false, 00:21:19.328 "rdma_max_cq_size": 0, 00:21:19.328 "rdma_cm_event_timeout_ms": 0, 00:21:19.328 "dhchap_digests": [ 00:21:19.328 "sha256", 00:21:19.328 "sha384", 00:21:19.328 "sha512" 00:21:19.328 ], 00:21:19.328 "dhchap_dhgroups": [ 00:21:19.328 "null", 00:21:19.328 
"ffdhe2048", 00:21:19.328 "ffdhe3072", 00:21:19.328 "ffdhe4096", 00:21:19.328 "ffdhe6144", 00:21:19.328 "ffdhe8192" 00:21:19.328 ], 00:21:19.328 "rdma_umr_per_io": false 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "bdev_nvme_set_hotplug", 00:21:19.328 "params": { 00:21:19.328 "period_us": 100000, 00:21:19.328 "enable": false 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "bdev_malloc_create", 00:21:19.328 "params": { 00:21:19.328 "name": "malloc0", 00:21:19.328 "num_blocks": 8192, 00:21:19.328 "block_size": 4096, 00:21:19.328 "physical_block_size": 4096, 00:21:19.328 "uuid": "9c89b5f0-144f-4093-bf44-d8e7ec6b5d66", 00:21:19.328 "optimal_io_boundary": 0, 00:21:19.328 "md_size": 0, 00:21:19.328 "dif_type": 0, 00:21:19.328 "dif_is_head_of_md": false, 00:21:19.328 "dif_pi_format": 0 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "bdev_wait_for_examine" 00:21:19.328 } 00:21:19.328 ] 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "subsystem": "nbd", 00:21:19.328 "config": [] 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "subsystem": "scheduler", 00:21:19.328 "config": [ 00:21:19.328 { 00:21:19.328 "method": "framework_set_scheduler", 00:21:19.328 "params": { 00:21:19.328 "name": "static" 00:21:19.328 } 00:21:19.328 } 00:21:19.328 ] 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "subsystem": "nvmf", 00:21:19.328 "config": [ 00:21:19.328 { 00:21:19.328 "method": "nvmf_set_config", 00:21:19.328 "params": { 00:21:19.328 "discovery_filter": "match_any", 00:21:19.328 "admin_cmd_passthru": { 00:21:19.328 "identify_ctrlr": false 00:21:19.328 }, 00:21:19.328 "dhchap_digests": [ 00:21:19.328 "sha256", 00:21:19.328 "sha384", 00:21:19.328 "sha512" 00:21:19.328 ], 00:21:19.328 "dhchap_dhgroups": [ 00:21:19.328 "null", 00:21:19.328 "ffdhe2048", 00:21:19.328 "ffdhe3072", 00:21:19.328 "ffdhe4096", 00:21:19.328 "ffdhe6144", 00:21:19.328 "ffdhe8192" 00:21:19.328 ] 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 
"method": "nvmf_set_max_subsystems", 00:21:19.328 "params": { 00:21:19.328 "max_subsystems": 1024 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "nvmf_set_crdt", 00:21:19.328 "params": { 00:21:19.328 "crdt1": 0, 00:21:19.328 "crdt2": 0, 00:21:19.328 "crdt3": 0 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "nvmf_create_transport", 00:21:19.328 "params": { 00:21:19.328 "trtype": "TCP", 00:21:19.328 "max_queue_depth": 128, 00:21:19.328 "max_io_qpairs_per_ctrlr": 127, 00:21:19.328 "in_capsule_data_size": 4096, 00:21:19.328 "max_io_size": 131072, 00:21:19.328 "io_unit_size": 131072, 00:21:19.328 "max_aq_depth": 128, 00:21:19.328 "num_shared_buffers": 511, 00:21:19.328 "buf_cache_size": 4294967295, 00:21:19.328 "dif_insert_or_strip": false, 00:21:19.328 "zcopy": false, 00:21:19.328 "c2h_success": false, 00:21:19.328 "sock_priority": 0, 00:21:19.328 "abort_timeout_sec": 1, 00:21:19.328 "ack_timeout": 0, 00:21:19.328 "data_wr_pool_size": 0 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "nvmf_create_subsystem", 00:21:19.328 "params": { 00:21:19.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.328 "allow_any_host": false, 00:21:19.328 "serial_number": "SPDK00000000000001", 00:21:19.328 "model_number": "SPDK bdev Controller", 00:21:19.328 "max_namespaces": 10, 00:21:19.328 "min_cntlid": 1, 00:21:19.328 "max_cntlid": 65519, 00:21:19.328 "ana_reporting": false 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "nvmf_subsystem_add_host", 00:21:19.328 "params": { 00:21:19.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.328 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.328 "psk": "key0" 00:21:19.328 } 00:21:19.328 }, 00:21:19.328 { 00:21:19.328 "method": "nvmf_subsystem_add_ns", 00:21:19.328 "params": { 00:21:19.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.328 "namespace": { 00:21:19.328 "nsid": 1, 00:21:19.328 "bdev_name": "malloc0", 00:21:19.328 "nguid": 
"9C89B5F0144F4093BF44D8E7EC6B5D66", 00:21:19.328 "uuid": "9c89b5f0-144f-4093-bf44-d8e7ec6b5d66", 00:21:19.328 "no_auto_visible": false 00:21:19.328 } 00:21:19.329 } 00:21:19.329 }, 00:21:19.329 { 00:21:19.329 "method": "nvmf_subsystem_add_listener", 00:21:19.329 "params": { 00:21:19.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.329 "listen_address": { 00:21:19.329 "trtype": "TCP", 00:21:19.329 "adrfam": "IPv4", 00:21:19.329 "traddr": "10.0.0.2", 00:21:19.329 "trsvcid": "4420" 00:21:19.329 }, 00:21:19.329 "secure_channel": true 00:21:19.329 } 00:21:19.329 } 00:21:19.329 ] 00:21:19.329 } 00:21:19.329 ] 00:21:19.329 }' 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=157330 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 157330 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 157330 ']' 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.329 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.329 [2024-12-10 05:46:37.210760] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:19.329 [2024-12-10 05:46:37.210807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.587 [2024-12-10 05:46:37.293053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.587 [2024-12-10 05:46:37.331528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.587 [2024-12-10 05:46:37.331563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.587 [2024-12-10 05:46:37.331570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.587 [2024-12-10 05:46:37.331576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.587 [2024-12-10 05:46:37.331581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.587 [2024-12-10 05:46:37.332142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.846 [2024-12-10 05:46:37.544805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.846 [2024-12-10 05:46:37.576828] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.846 [2024-12-10 05:46:37.577020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.104 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.104 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:20.104 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.104 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.104 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=157485 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 157485 /var/tmp/bdevperf.sock 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 157485 ']' 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:20.363 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:20.363 "subsystems": [ 00:21:20.363 { 00:21:20.363 "subsystem": "keyring", 00:21:20.363 "config": [ 00:21:20.363 { 00:21:20.363 "method": "keyring_file_add_key", 00:21:20.363 "params": { 00:21:20.363 "name": "key0", 00:21:20.363 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:20.363 } 00:21:20.363 } 00:21:20.363 ] 00:21:20.363 }, 00:21:20.363 { 00:21:20.363 "subsystem": "iobuf", 00:21:20.363 "config": [ 00:21:20.363 { 00:21:20.363 "method": "iobuf_set_options", 00:21:20.363 "params": { 00:21:20.363 "small_pool_count": 8192, 00:21:20.363 "large_pool_count": 1024, 00:21:20.363 "small_bufsize": 8192, 00:21:20.363 "large_bufsize": 135168, 00:21:20.363 "enable_numa": false 00:21:20.363 } 00:21:20.363 } 00:21:20.363 ] 00:21:20.363 }, 00:21:20.363 { 00:21:20.363 "subsystem": "sock", 00:21:20.363 "config": [ 00:21:20.363 { 00:21:20.363 "method": "sock_set_default_impl", 00:21:20.363 "params": { 00:21:20.363 "impl_name": "posix" 00:21:20.363 } 00:21:20.363 }, 00:21:20.363 { 00:21:20.363 "method": "sock_impl_set_options", 00:21:20.363 "params": { 00:21:20.363 "impl_name": "ssl", 00:21:20.363 "recv_buf_size": 4096, 00:21:20.363 "send_buf_size": 4096, 00:21:20.363 "enable_recv_pipe": true, 00:21:20.363 "enable_quickack": false, 00:21:20.363 "enable_placement_id": 0, 00:21:20.363 "enable_zerocopy_send_server": true, 00:21:20.363 "enable_zerocopy_send_client": false, 00:21:20.363 "zerocopy_threshold": 0, 00:21:20.363 "tls_version": 0, 00:21:20.363 "enable_ktls": false 00:21:20.363 } 00:21:20.363 }, 00:21:20.363 { 00:21:20.363 "method": "sock_impl_set_options", 00:21:20.363 "params": { 00:21:20.363 "impl_name": "posix", 00:21:20.363 "recv_buf_size": 2097152, 00:21:20.363 "send_buf_size": 2097152, 00:21:20.363 "enable_recv_pipe": true, 00:21:20.363 "enable_quickack": false, 00:21:20.363 "enable_placement_id": 0, 00:21:20.363 "enable_zerocopy_send_server": true, 00:21:20.363 
"enable_zerocopy_send_client": false, 00:21:20.363 "zerocopy_threshold": 0, 00:21:20.363 "tls_version": 0, 00:21:20.363 "enable_ktls": false 00:21:20.363 } 00:21:20.363 } 00:21:20.363 ] 00:21:20.363 }, 00:21:20.363 { 00:21:20.363 "subsystem": "vmd", 00:21:20.363 "config": [] 00:21:20.363 }, 00:21:20.364 { 00:21:20.364 "subsystem": "accel", 00:21:20.364 "config": [ 00:21:20.364 { 00:21:20.364 "method": "accel_set_options", 00:21:20.364 "params": { 00:21:20.364 "small_cache_size": 128, 00:21:20.364 "large_cache_size": 16, 00:21:20.364 "task_count": 2048, 00:21:20.364 "sequence_count": 2048, 00:21:20.364 "buf_count": 2048 00:21:20.364 } 00:21:20.364 } 00:21:20.364 ] 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "subsystem": "bdev", 00:21:20.364 "config": [ 00:21:20.364 { 00:21:20.364 "method": "bdev_set_options", 00:21:20.364 "params": { 00:21:20.364 "bdev_io_pool_size": 65535, 00:21:20.364 "bdev_io_cache_size": 256, 00:21:20.364 "bdev_auto_examine": true, 00:21:20.364 "iobuf_small_cache_size": 128, 00:21:20.364 "iobuf_large_cache_size": 16 00:21:20.364 } 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "method": "bdev_raid_set_options", 00:21:20.364 "params": { 00:21:20.364 "process_window_size_kb": 1024, 00:21:20.364 "process_max_bandwidth_mb_sec": 0 00:21:20.364 } 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "method": "bdev_iscsi_set_options", 00:21:20.364 "params": { 00:21:20.364 "timeout_sec": 30 00:21:20.364 } 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "method": "bdev_nvme_set_options", 00:21:20.364 "params": { 00:21:20.364 "action_on_timeout": "none", 00:21:20.364 "timeout_us": 0, 00:21:20.364 "timeout_admin_us": 0, 00:21:20.364 "keep_alive_timeout_ms": 10000, 00:21:20.364 "arbitration_burst": 0, 00:21:20.364 "low_priority_weight": 0, 00:21:20.364 "medium_priority_weight": 0, 00:21:20.364 "high_priority_weight": 0, 00:21:20.364 "nvme_adminq_poll_period_us": 10000, 00:21:20.364 "nvme_ioq_poll_period_us": 0, 00:21:20.364 "io_queue_requests": 512, 00:21:20.364 
"delay_cmd_submit": true, 00:21:20.364 "transport_retry_count": 4, 00:21:20.364 "bdev_retry_count": 3, 00:21:20.364 "transport_ack_timeout": 0, 00:21:20.364 "ctrlr_loss_timeout_sec": 0, 00:21:20.364 "reconnect_delay_sec": 0, 00:21:20.364 "fast_io_fail_timeout_sec": 0, 00:21:20.364 "disable_auto_failback": false, 00:21:20.364 "generate_uuids": false, 00:21:20.364 "transport_tos": 0, 00:21:20.364 "nvme_error_stat": false, 00:21:20.364 "rdma_srq_size": 0, 00:21:20.364 "io_path_stat": false, 00:21:20.364 "allow_accel_sequence": false, 00:21:20.364 "rdma_max_cq_size": 0, 00:21:20.364 "rdma_cm_event_timeout_ms": 0, 00:21:20.364 "dhchap_digests": [ 00:21:20.364 "sha256", 00:21:20.364 "sha384", 00:21:20.364 "sha512" 00:21:20.364 ], 00:21:20.364 "dhchap_dhgroups": [ 00:21:20.364 "null", 00:21:20.364 "ffdhe2048", 00:21:20.364 "ffdhe3072", 00:21:20.364 "ffdhe4096", 00:21:20.364 "ffdhe6144", 00:21:20.364 "ffdhe8192" 00:21:20.364 ], 00:21:20.364 "rdma_umr_per_io": false 00:21:20.364 } 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "method": "bdev_nvme_attach_controller", 00:21:20.364 "params": { 00:21:20.364 "name": "TLSTEST", 00:21:20.364 "trtype": "TCP", 00:21:20.364 "adrfam": "IPv4", 00:21:20.364 "traddr": "10.0.0.2", 00:21:20.364 "trsvcid": "4420", 00:21:20.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.364 "prchk_reftag": false, 00:21:20.364 "prchk_guard": false, 00:21:20.364 "ctrlr_loss_timeout_sec": 0, 00:21:20.364 "reconnect_delay_sec": 0, 00:21:20.364 "fast_io_fail_timeout_sec": 0, 00:21:20.364 "psk": "key0", 00:21:20.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.364 "hdgst": false, 00:21:20.364 "ddgst": false, 00:21:20.364 "multipath": "multipath" 00:21:20.364 } 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "method": "bdev_nvme_set_hotplug", 00:21:20.364 "params": { 00:21:20.364 "period_us": 100000, 00:21:20.364 "enable": false 00:21:20.364 } 00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "method": "bdev_wait_for_examine" 00:21:20.364 } 00:21:20.364 ] 
00:21:20.364 }, 00:21:20.364 { 00:21:20.364 "subsystem": "nbd", 00:21:20.364 "config": [] 00:21:20.364 } 00:21:20.364 ] 00:21:20.364 }' 00:21:20.364 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.364 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.364 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.364 [2024-12-10 05:46:38.119934] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:20.364 [2024-12-10 05:46:38.119984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157485 ] 00:21:20.364 [2024-12-10 05:46:38.202937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.364 [2024-12-10 05:46:38.243522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.623 [2024-12-10 05:46:38.395319] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.190 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.190 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.190 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:21.190 Running I/O for 10 seconds... 
00:21:23.501 5615.00 IOPS, 21.93 MiB/s [2024-12-10T04:46:42.395Z] 5591.00 IOPS, 21.84 MiB/s [2024-12-10T04:46:43.330Z] 5610.33 IOPS, 21.92 MiB/s [2024-12-10T04:46:44.264Z] 5606.00 IOPS, 21.90 MiB/s [2024-12-10T04:46:45.199Z] 5631.20 IOPS, 22.00 MiB/s [2024-12-10T04:46:46.133Z] 5643.67 IOPS, 22.05 MiB/s [2024-12-10T04:46:47.508Z] 5659.86 IOPS, 22.11 MiB/s [2024-12-10T04:46:48.075Z] 5633.75 IOPS, 22.01 MiB/s [2024-12-10T04:46:49.450Z] 5628.78 IOPS, 21.99 MiB/s [2024-12-10T04:46:49.450Z] 5623.20 IOPS, 21.97 MiB/s 00:21:31.491 Latency(us) 00:21:31.491 [2024-12-10T04:46:49.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.491 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:31.491 Verification LBA range: start 0x0 length 0x2000 00:21:31.491 TLSTESTn1 : 10.01 5628.49 21.99 0.00 0.00 22708.20 5430.13 23717.79 00:21:31.491 [2024-12-10T04:46:49.450Z] =================================================================================================================== 00:21:31.491 [2024-12-10T04:46:49.450Z] Total : 5628.49 21.99 0.00 0.00 22708.20 5430.13 23717.79 00:21:31.491 { 00:21:31.491 "results": [ 00:21:31.491 { 00:21:31.491 "job": "TLSTESTn1", 00:21:31.491 "core_mask": "0x4", 00:21:31.491 "workload": "verify", 00:21:31.491 "status": "finished", 00:21:31.491 "verify_range": { 00:21:31.491 "start": 0, 00:21:31.491 "length": 8192 00:21:31.491 }, 00:21:31.491 "queue_depth": 128, 00:21:31.491 "io_size": 4096, 00:21:31.491 "runtime": 10.012625, 00:21:31.491 "iops": 5628.494026291806, 00:21:31.491 "mibps": 21.986304790202368, 00:21:31.491 "io_failed": 0, 00:21:31.491 "io_timeout": 0, 00:21:31.491 "avg_latency_us": 22708.195259929227, 00:21:31.491 "min_latency_us": 5430.125714285714, 00:21:31.491 "max_latency_us": 23717.790476190476 00:21:31.491 } 00:21:31.491 ], 00:21:31.491 "core_count": 1 00:21:31.491 } 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 157485 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 157485 ']' 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 157485 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157485 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157485' 00:21:31.491 killing process with pid 157485 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 157485 00:21:31.491 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.491 00:21:31.491 Latency(us) 00:21:31.491 [2024-12-10T04:46:49.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.491 [2024-12-10T04:46:49.450Z] =================================================================================================================== 00:21:31.491 [2024-12-10T04:46:49.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 157485 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 157330 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 157330 ']' 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 157330 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157330 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157330' 00:21:31.491 killing process with pid 157330 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 157330 00:21:31.491 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 157330 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=159388 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 159388 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:31.750 05:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 159388 ']' 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.750 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.750 [2024-12-10 05:46:49.594762] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:31.750 [2024-12-10 05:46:49.594807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.750 [2024-12-10 05:46:49.661268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.750 [2024-12-10 05:46:49.700775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.750 [2024-12-10 05:46:49.700810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.750 [2024-12-10 05:46:49.700817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.750 [2024-12-10 05:46:49.700823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:31.750 [2024-12-10 05:46:49.700829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.750 [2024-12-10 05:46:49.701368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ovQJZlz7JE 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ovQJZlz7JE 00:21:32.009 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.268 [2024-12-10 05:46:50.000791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.268 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:32.526 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:32.526 [2024-12-10 05:46:50.413858] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:32.527 [2024-12-10 05:46:50.414081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.527 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:32.785 malloc0 00:21:32.785 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.044 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:33.302 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=159643 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 159643 /var/tmp/bdevperf.sock 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 159643 ']' 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.561 05:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.561 [2024-12-10 05:46:51.289620] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:33.561 [2024-12-10 05:46:51.289668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159643 ] 00:21:33.561 [2024-12-10 05:46:51.368195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.561 [2024-12-10 05:46:51.408673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:33.561 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:33.819 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:34.077 [2024-12-10 05:46:51.852084] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:21:34.077 nvme0n1 00:21:34.078 05:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.336 Running I/O for 1 seconds... 00:21:35.272 5142.00 IOPS, 20.09 MiB/s 00:21:35.272 Latency(us) 00:21:35.272 [2024-12-10T04:46:53.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.272 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:35.272 Verification LBA range: start 0x0 length 0x2000 00:21:35.272 nvme0n1 : 1.02 5168.41 20.19 0.00 0.00 24543.02 4431.48 24217.11 00:21:35.272 [2024-12-10T04:46:53.231Z] =================================================================================================================== 00:21:35.272 [2024-12-10T04:46:53.231Z] Total : 5168.41 20.19 0.00 0.00 24543.02 4431.48 24217.11 00:21:35.272 { 00:21:35.272 "results": [ 00:21:35.272 { 00:21:35.272 "job": "nvme0n1", 00:21:35.272 "core_mask": "0x2", 00:21:35.272 "workload": "verify", 00:21:35.272 "status": "finished", 00:21:35.272 "verify_range": { 00:21:35.272 "start": 0, 00:21:35.272 "length": 8192 00:21:35.272 }, 00:21:35.272 "queue_depth": 128, 00:21:35.272 "io_size": 4096, 00:21:35.272 "runtime": 1.019655, 00:21:35.272 "iops": 5168.414806969024, 00:21:35.272 "mibps": 20.18912033972275, 00:21:35.272 "io_failed": 0, 00:21:35.272 "io_timeout": 0, 00:21:35.272 "avg_latency_us": 24543.024738772932, 00:21:35.272 "min_latency_us": 4431.481904761905, 00:21:35.272 "max_latency_us": 24217.11238095238 00:21:35.272 } 00:21:35.272 ], 00:21:35.272 "core_count": 1 00:21:35.272 } 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 159643 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 159643 ']' 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 
-- # kill -0 159643 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159643 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159643' 00:21:35.272 killing process with pid 159643 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 159643 00:21:35.272 Received shutdown signal, test time was about 1.000000 seconds 00:21:35.272 00:21:35.272 Latency(us) 00:21:35.272 [2024-12-10T04:46:53.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.272 [2024-12-10T04:46:53.231Z] =================================================================================================================== 00:21:35.272 [2024-12-10T04:46:53.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.272 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 159643 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 159388 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 159388 ']' 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 159388 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159388 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159388' 00:21:35.530 killing process with pid 159388 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 159388 00:21:35.530 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 159388 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=159958 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 159958 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 159958 ']' 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.789 05:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.789 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.789 [2024-12-10 05:46:53.556504] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:35.789 [2024-12-10 05:46:53.556553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.789 [2024-12-10 05:46:53.641729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.789 [2024-12-10 05:46:53.678876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.789 [2024-12-10 05:46:53.678913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.789 [2024-12-10 05:46:53.678920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.789 [2024-12-10 05:46:53.678927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.789 [2024-12-10 05:46:53.678932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
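The bdevperf summary printed earlier in this log (the `perform_tests` result block with `"iops": 5168.414806969024` and `"runtime": 1.019655`) can be sanity-checked with a short calculation. This is a minimal sketch added for illustration, not part of the test run; the derived relationships (MiB/s from IOPS and I/O size, total I/O count from IOPS and runtime) are assumptions about how bdevperf computes its summary fields, taken from the numbers in the JSON rather than from SPDK source.

```python
# Sanity-check the bdevperf summary arithmetic from the first
# perform_tests result block above. Figures are copied verbatim
# from the JSON; the formulas are assumptions, not SPDK source.

IO_SIZE = 4096            # bytes per I/O, from "io_size": 4096
iops = 5168.414806969024  # from "iops"
runtime_s = 1.019655      # from "runtime"

# MiB/s = IOPS * io_size / 2^20; with 4 KiB I/Os this is IOPS / 256,
# which reproduces the reported "mibps": 20.18912033972275.
mibps = iops * IO_SIZE / (1 << 20)
print(round(mibps, 6))    # ~20.18912

# Total completed I/Os over the run, recovered from IOPS * runtime.
total_ios = round(iops * runtime_s)
print(total_ios)          # ~5270
```

The same check applies to the second result block later in the log (`"iops": 5543.99048811607`, `"runtime": 1.012628` giving `"mibps": 21.6562128442034`).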
00:21:35.789 [2024-12-10 05:46:53.679446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.048 [2024-12-10 05:46:53.822942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.048 malloc0 00:21:36.048 [2024-12-10 05:46:53.851061] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.048 [2024-12-10 05:46:53.851276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=160125 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 160125 /var/tmp/bdevperf.sock 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 160125 ']' 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.048 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.048 [2024-12-10 05:46:53.928402] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:21:36.048 [2024-12-10 05:46:53.928443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160125 ] 00:21:36.307 [2024-12-10 05:46:54.006667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.307 [2024-12-10 05:46:54.045542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.873 05:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.873 05:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:36.873 05:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ovQJZlz7JE 00:21:37.131 05:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:37.387 [2024-12-10 05:46:55.127771] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.387 nvme0n1 00:21:37.387 05:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.387 Running I/O for 1 seconds... 
00:21:38.763 5486.00 IOPS, 21.43 MiB/s 00:21:38.763 Latency(us) 00:21:38.763 [2024-12-10T04:46:56.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.763 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:38.763 Verification LBA range: start 0x0 length 0x2000 00:21:38.763 nvme0n1 : 1.01 5543.99 21.66 0.00 0.00 22934.42 5742.20 17975.59 00:21:38.763 [2024-12-10T04:46:56.722Z] =================================================================================================================== 00:21:38.763 [2024-12-10T04:46:56.722Z] Total : 5543.99 21.66 0.00 0.00 22934.42 5742.20 17975.59 00:21:38.763 { 00:21:38.763 "results": [ 00:21:38.763 { 00:21:38.763 "job": "nvme0n1", 00:21:38.763 "core_mask": "0x2", 00:21:38.763 "workload": "verify", 00:21:38.763 "status": "finished", 00:21:38.763 "verify_range": { 00:21:38.763 "start": 0, 00:21:38.763 "length": 8192 00:21:38.763 }, 00:21:38.763 "queue_depth": 128, 00:21:38.763 "io_size": 4096, 00:21:38.763 "runtime": 1.012628, 00:21:38.763 "iops": 5543.99048811607, 00:21:38.763 "mibps": 21.6562128442034, 00:21:38.763 "io_failed": 0, 00:21:38.763 "io_timeout": 0, 00:21:38.763 "avg_latency_us": 22934.423093287187, 00:21:38.763 "min_latency_us": 5742.201904761905, 00:21:38.763 "max_latency_us": 17975.588571428572 00:21:38.763 } 00:21:38.763 ], 00:21:38.763 "core_count": 1 00:21:38.763 } 00:21:38.763 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:38.763 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.763 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.763 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.763 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:38.763 "subsystems": [ 00:21:38.763 { 00:21:38.763 "subsystem": 
"keyring", 00:21:38.763 "config": [ 00:21:38.763 { 00:21:38.763 "method": "keyring_file_add_key", 00:21:38.763 "params": { 00:21:38.763 "name": "key0", 00:21:38.763 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:38.763 } 00:21:38.763 } 00:21:38.763 ] 00:21:38.763 }, 00:21:38.763 { 00:21:38.763 "subsystem": "iobuf", 00:21:38.763 "config": [ 00:21:38.763 { 00:21:38.763 "method": "iobuf_set_options", 00:21:38.763 "params": { 00:21:38.763 "small_pool_count": 8192, 00:21:38.763 "large_pool_count": 1024, 00:21:38.763 "small_bufsize": 8192, 00:21:38.763 "large_bufsize": 135168, 00:21:38.763 "enable_numa": false 00:21:38.763 } 00:21:38.763 } 00:21:38.763 ] 00:21:38.763 }, 00:21:38.763 { 00:21:38.763 "subsystem": "sock", 00:21:38.763 "config": [ 00:21:38.763 { 00:21:38.763 "method": "sock_set_default_impl", 00:21:38.763 "params": { 00:21:38.763 "impl_name": "posix" 00:21:38.763 } 00:21:38.763 }, 00:21:38.763 { 00:21:38.763 "method": "sock_impl_set_options", 00:21:38.763 "params": { 00:21:38.763 "impl_name": "ssl", 00:21:38.763 "recv_buf_size": 4096, 00:21:38.763 "send_buf_size": 4096, 00:21:38.763 "enable_recv_pipe": true, 00:21:38.763 "enable_quickack": false, 00:21:38.763 "enable_placement_id": 0, 00:21:38.763 "enable_zerocopy_send_server": true, 00:21:38.763 "enable_zerocopy_send_client": false, 00:21:38.763 "zerocopy_threshold": 0, 00:21:38.763 "tls_version": 0, 00:21:38.763 "enable_ktls": false 00:21:38.763 } 00:21:38.763 }, 00:21:38.763 { 00:21:38.763 "method": "sock_impl_set_options", 00:21:38.763 "params": { 00:21:38.763 "impl_name": "posix", 00:21:38.763 "recv_buf_size": 2097152, 00:21:38.763 "send_buf_size": 2097152, 00:21:38.763 "enable_recv_pipe": true, 00:21:38.763 "enable_quickack": false, 00:21:38.763 "enable_placement_id": 0, 00:21:38.763 "enable_zerocopy_send_server": true, 00:21:38.763 "enable_zerocopy_send_client": false, 00:21:38.763 "zerocopy_threshold": 0, 00:21:38.763 "tls_version": 0, 00:21:38.764 "enable_ktls": false 00:21:38.764 } 00:21:38.764 } 00:21:38.764 
] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "vmd", 00:21:38.764 "config": [] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "accel", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "accel_set_options", 00:21:38.764 "params": { 00:21:38.764 "small_cache_size": 128, 00:21:38.764 "large_cache_size": 16, 00:21:38.764 "task_count": 2048, 00:21:38.764 "sequence_count": 2048, 00:21:38.764 "buf_count": 2048 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "bdev", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "bdev_set_options", 00:21:38.764 "params": { 00:21:38.764 "bdev_io_pool_size": 65535, 00:21:38.764 "bdev_io_cache_size": 256, 00:21:38.764 "bdev_auto_examine": true, 00:21:38.764 "iobuf_small_cache_size": 128, 00:21:38.764 "iobuf_large_cache_size": 16 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_raid_set_options", 00:21:38.764 "params": { 00:21:38.764 "process_window_size_kb": 1024, 00:21:38.764 "process_max_bandwidth_mb_sec": 0 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_iscsi_set_options", 00:21:38.764 "params": { 00:21:38.764 "timeout_sec": 30 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_nvme_set_options", 00:21:38.764 "params": { 00:21:38.764 "action_on_timeout": "none", 00:21:38.764 "timeout_us": 0, 00:21:38.764 "timeout_admin_us": 0, 00:21:38.764 "keep_alive_timeout_ms": 10000, 00:21:38.764 "arbitration_burst": 0, 00:21:38.764 "low_priority_weight": 0, 00:21:38.764 "medium_priority_weight": 0, 00:21:38.764 "high_priority_weight": 0, 00:21:38.764 "nvme_adminq_poll_period_us": 10000, 00:21:38.764 "nvme_ioq_poll_period_us": 0, 00:21:38.764 "io_queue_requests": 0, 00:21:38.764 "delay_cmd_submit": true, 00:21:38.764 "transport_retry_count": 4, 00:21:38.764 "bdev_retry_count": 3, 00:21:38.764 "transport_ack_timeout": 0, 00:21:38.764 "ctrlr_loss_timeout_sec": 0, 
00:21:38.764 "reconnect_delay_sec": 0, 00:21:38.764 "fast_io_fail_timeout_sec": 0, 00:21:38.764 "disable_auto_failback": false, 00:21:38.764 "generate_uuids": false, 00:21:38.764 "transport_tos": 0, 00:21:38.764 "nvme_error_stat": false, 00:21:38.764 "rdma_srq_size": 0, 00:21:38.764 "io_path_stat": false, 00:21:38.764 "allow_accel_sequence": false, 00:21:38.764 "rdma_max_cq_size": 0, 00:21:38.764 "rdma_cm_event_timeout_ms": 0, 00:21:38.764 "dhchap_digests": [ 00:21:38.764 "sha256", 00:21:38.764 "sha384", 00:21:38.764 "sha512" 00:21:38.764 ], 00:21:38.764 "dhchap_dhgroups": [ 00:21:38.764 "null", 00:21:38.764 "ffdhe2048", 00:21:38.764 "ffdhe3072", 00:21:38.764 "ffdhe4096", 00:21:38.764 "ffdhe6144", 00:21:38.764 "ffdhe8192" 00:21:38.764 ], 00:21:38.764 "rdma_umr_per_io": false 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_nvme_set_hotplug", 00:21:38.764 "params": { 00:21:38.764 "period_us": 100000, 00:21:38.764 "enable": false 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_malloc_create", 00:21:38.764 "params": { 00:21:38.764 "name": "malloc0", 00:21:38.764 "num_blocks": 8192, 00:21:38.764 "block_size": 4096, 00:21:38.764 "physical_block_size": 4096, 00:21:38.764 "uuid": "d8f1f1ad-8d3e-4c20-9249-5960fa2f8be1", 00:21:38.764 "optimal_io_boundary": 0, 00:21:38.764 "md_size": 0, 00:21:38.764 "dif_type": 0, 00:21:38.764 "dif_is_head_of_md": false, 00:21:38.764 "dif_pi_format": 0 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_wait_for_examine" 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "nbd", 00:21:38.764 "config": [] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "scheduler", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "framework_set_scheduler", 00:21:38.764 "params": { 00:21:38.764 "name": "static" 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "nvmf", 
00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "nvmf_set_config", 00:21:38.764 "params": { 00:21:38.764 "discovery_filter": "match_any", 00:21:38.764 "admin_cmd_passthru": { 00:21:38.764 "identify_ctrlr": false 00:21:38.764 }, 00:21:38.764 "dhchap_digests": [ 00:21:38.764 "sha256", 00:21:38.764 "sha384", 00:21:38.764 "sha512" 00:21:38.764 ], 00:21:38.764 "dhchap_dhgroups": [ 00:21:38.764 "null", 00:21:38.764 "ffdhe2048", 00:21:38.764 "ffdhe3072", 00:21:38.764 "ffdhe4096", 00:21:38.764 "ffdhe6144", 00:21:38.764 "ffdhe8192" 00:21:38.764 ] 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_set_max_subsystems", 00:21:38.764 "params": { 00:21:38.764 "max_subsystems": 1024 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_set_crdt", 00:21:38.764 "params": { 00:21:38.764 "crdt1": 0, 00:21:38.764 "crdt2": 0, 00:21:38.764 "crdt3": 0 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_create_transport", 00:21:38.764 "params": { 00:21:38.764 "trtype": "TCP", 00:21:38.764 "max_queue_depth": 128, 00:21:38.764 "max_io_qpairs_per_ctrlr": 127, 00:21:38.764 "in_capsule_data_size": 4096, 00:21:38.764 "max_io_size": 131072, 00:21:38.764 "io_unit_size": 131072, 00:21:38.764 "max_aq_depth": 128, 00:21:38.764 "num_shared_buffers": 511, 00:21:38.764 "buf_cache_size": 4294967295, 00:21:38.764 "dif_insert_or_strip": false, 00:21:38.764 "zcopy": false, 00:21:38.764 "c2h_success": false, 00:21:38.764 "sock_priority": 0, 00:21:38.764 "abort_timeout_sec": 1, 00:21:38.764 "ack_timeout": 0, 00:21:38.764 "data_wr_pool_size": 0 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_create_subsystem", 00:21:38.764 "params": { 00:21:38.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.764 "allow_any_host": false, 00:21:38.764 "serial_number": "00000000000000000000", 00:21:38.764 "model_number": "SPDK bdev Controller", 00:21:38.764 "max_namespaces": 32, 00:21:38.764 "min_cntlid": 1, 
00:21:38.764 "max_cntlid": 65519, 00:21:38.764 "ana_reporting": false 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_subsystem_add_host", 00:21:38.764 "params": { 00:21:38.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.764 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.764 "psk": "key0" 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_subsystem_add_ns", 00:21:38.764 "params": { 00:21:38.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.764 "namespace": { 00:21:38.764 "nsid": 1, 00:21:38.764 "bdev_name": "malloc0", 00:21:38.764 "nguid": "D8F1F1AD8D3E4C2092495960FA2F8BE1", 00:21:38.764 "uuid": "d8f1f1ad-8d3e-4c20-9249-5960fa2f8be1", 00:21:38.764 "no_auto_visible": false 00:21:38.764 } 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "nvmf_subsystem_add_listener", 00:21:38.764 "params": { 00:21:38.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.764 "listen_address": { 00:21:38.764 "trtype": "TCP", 00:21:38.764 "adrfam": "IPv4", 00:21:38.764 "traddr": "10.0.0.2", 00:21:38.764 "trsvcid": "4420" 00:21:38.764 }, 00:21:38.764 "secure_channel": false, 00:21:38.764 "sock_impl": "ssl" 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }' 00:21:38.764 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:38.764 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:38.764 "subsystems": [ 00:21:38.764 { 00:21:38.764 "subsystem": "keyring", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "keyring_file_add_key", 00:21:38.764 "params": { 00:21:38.764 "name": "key0", 00:21:38.764 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "iobuf", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": 
"iobuf_set_options", 00:21:38.764 "params": { 00:21:38.764 "small_pool_count": 8192, 00:21:38.764 "large_pool_count": 1024, 00:21:38.764 "small_bufsize": 8192, 00:21:38.764 "large_bufsize": 135168, 00:21:38.764 "enable_numa": false 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "sock", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "sock_set_default_impl", 00:21:38.764 "params": { 00:21:38.764 "impl_name": "posix" 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "sock_impl_set_options", 00:21:38.764 "params": { 00:21:38.764 "impl_name": "ssl", 00:21:38.764 "recv_buf_size": 4096, 00:21:38.764 "send_buf_size": 4096, 00:21:38.764 "enable_recv_pipe": true, 00:21:38.764 "enable_quickack": false, 00:21:38.764 "enable_placement_id": 0, 00:21:38.764 "enable_zerocopy_send_server": true, 00:21:38.764 "enable_zerocopy_send_client": false, 00:21:38.764 "zerocopy_threshold": 0, 00:21:38.764 "tls_version": 0, 00:21:38.764 "enable_ktls": false 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "sock_impl_set_options", 00:21:38.764 "params": { 00:21:38.764 "impl_name": "posix", 00:21:38.764 "recv_buf_size": 2097152, 00:21:38.764 "send_buf_size": 2097152, 00:21:38.764 "enable_recv_pipe": true, 00:21:38.764 "enable_quickack": false, 00:21:38.764 "enable_placement_id": 0, 00:21:38.764 "enable_zerocopy_send_server": true, 00:21:38.764 "enable_zerocopy_send_client": false, 00:21:38.764 "zerocopy_threshold": 0, 00:21:38.764 "tls_version": 0, 00:21:38.764 "enable_ktls": false 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "vmd", 00:21:38.764 "config": [] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "accel", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "accel_set_options", 00:21:38.764 "params": { 00:21:38.764 "small_cache_size": 128, 00:21:38.764 "large_cache_size": 16, 00:21:38.764 "task_count": 
2048, 00:21:38.764 "sequence_count": 2048, 00:21:38.764 "buf_count": 2048 00:21:38.764 } 00:21:38.764 } 00:21:38.764 ] 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "subsystem": "bdev", 00:21:38.764 "config": [ 00:21:38.764 { 00:21:38.764 "method": "bdev_set_options", 00:21:38.764 "params": { 00:21:38.764 "bdev_io_pool_size": 65535, 00:21:38.764 "bdev_io_cache_size": 256, 00:21:38.764 "bdev_auto_examine": true, 00:21:38.764 "iobuf_small_cache_size": 128, 00:21:38.764 "iobuf_large_cache_size": 16 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_raid_set_options", 00:21:38.764 "params": { 00:21:38.764 "process_window_size_kb": 1024, 00:21:38.764 "process_max_bandwidth_mb_sec": 0 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_iscsi_set_options", 00:21:38.764 "params": { 00:21:38.764 "timeout_sec": 30 00:21:38.764 } 00:21:38.764 }, 00:21:38.764 { 00:21:38.764 "method": "bdev_nvme_set_options", 00:21:38.764 "params": { 00:21:38.764 "action_on_timeout": "none", 00:21:38.764 "timeout_us": 0, 00:21:38.764 "timeout_admin_us": 0, 00:21:38.764 "keep_alive_timeout_ms": 10000, 00:21:38.764 "arbitration_burst": 0, 00:21:38.764 "low_priority_weight": 0, 00:21:38.764 "medium_priority_weight": 0, 00:21:38.764 "high_priority_weight": 0, 00:21:38.764 "nvme_adminq_poll_period_us": 10000, 00:21:38.764 "nvme_ioq_poll_period_us": 0, 00:21:38.764 "io_queue_requests": 512, 00:21:38.764 "delay_cmd_submit": true, 00:21:38.764 "transport_retry_count": 4, 00:21:38.764 "bdev_retry_count": 3, 00:21:38.764 "transport_ack_timeout": 0, 00:21:38.764 "ctrlr_loss_timeout_sec": 0, 00:21:38.764 "reconnect_delay_sec": 0, 00:21:38.764 "fast_io_fail_timeout_sec": 0, 00:21:38.764 "disable_auto_failback": false, 00:21:38.764 "generate_uuids": false, 00:21:38.764 "transport_tos": 0, 00:21:38.764 "nvme_error_stat": false, 00:21:38.764 "rdma_srq_size": 0, 00:21:38.764 "io_path_stat": false, 00:21:38.764 "allow_accel_sequence": false, 00:21:38.764 
"rdma_max_cq_size": 0, 00:21:38.764 "rdma_cm_event_timeout_ms": 0, 00:21:38.764 "dhchap_digests": [ 00:21:38.764 "sha256", 00:21:38.764 "sha384", 00:21:38.765 "sha512" 00:21:38.765 ], 00:21:38.765 "dhchap_dhgroups": [ 00:21:38.765 "null", 00:21:38.765 "ffdhe2048", 00:21:38.765 "ffdhe3072", 00:21:38.765 "ffdhe4096", 00:21:38.765 "ffdhe6144", 00:21:38.765 "ffdhe8192" 00:21:38.765 ], 00:21:38.765 "rdma_umr_per_io": false 00:21:38.765 } 00:21:38.765 }, 00:21:38.765 { 00:21:38.765 "method": "bdev_nvme_attach_controller", 00:21:38.765 "params": { 00:21:38.765 "name": "nvme0", 00:21:38.765 "trtype": "TCP", 00:21:38.765 "adrfam": "IPv4", 00:21:38.765 "traddr": "10.0.0.2", 00:21:38.765 "trsvcid": "4420", 00:21:38.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.765 "prchk_reftag": false, 00:21:38.765 "prchk_guard": false, 00:21:38.765 "ctrlr_loss_timeout_sec": 0, 00:21:38.765 "reconnect_delay_sec": 0, 00:21:38.765 "fast_io_fail_timeout_sec": 0, 00:21:38.765 "psk": "key0", 00:21:38.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.765 "hdgst": false, 00:21:38.765 "ddgst": false, 00:21:38.765 "multipath": "multipath" 00:21:38.765 } 00:21:38.765 }, 00:21:38.765 { 00:21:38.765 "method": "bdev_nvme_set_hotplug", 00:21:38.765 "params": { 00:21:38.765 "period_us": 100000, 00:21:38.765 "enable": false 00:21:38.765 } 00:21:38.765 }, 00:21:38.765 { 00:21:38.765 "method": "bdev_enable_histogram", 00:21:38.765 "params": { 00:21:38.765 "name": "nvme0n1", 00:21:38.765 "enable": true 00:21:38.765 } 00:21:38.765 }, 00:21:38.765 { 00:21:38.765 "method": "bdev_wait_for_examine" 00:21:38.765 } 00:21:38.765 ] 00:21:38.765 }, 00:21:38.765 { 00:21:38.765 "subsystem": "nbd", 00:21:38.765 "config": [] 00:21:38.765 } 00:21:38.765 ] 00:21:38.765 }' 00:21:38.765 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 160125 00:21:38.765 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 160125 ']' 00:21:38.765 05:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 160125 00:21:38.765 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160125 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160125' 00:21:39.024 killing process with pid 160125 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 160125 00:21:39.024 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.024 00:21:39.024 Latency(us) 00:21:39.024 [2024-12-10T04:46:56.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.024 [2024-12-10T04:46:56.983Z] =================================================================================================================== 00:21:39.024 [2024-12-10T04:46:56.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 160125 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 159958 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 159958 ']' 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 159958 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:39.024 05:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159958 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159958' 00:21:39.024 killing process with pid 159958 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 159958 00:21:39.024 05:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 159958 00:21:39.284 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:39.284 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.284 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.284 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:39.284 "subsystems": [ 00:21:39.284 { 00:21:39.284 "subsystem": "keyring", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": "keyring_file_add_key", 00:21:39.284 "params": { 00:21:39.284 "name": "key0", 00:21:39.284 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:39.284 } 00:21:39.284 } 00:21:39.284 ] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "iobuf", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": "iobuf_set_options", 00:21:39.284 "params": { 00:21:39.284 "small_pool_count": 8192, 00:21:39.284 "large_pool_count": 1024, 00:21:39.284 "small_bufsize": 8192, 00:21:39.284 "large_bufsize": 135168, 00:21:39.284 "enable_numa": false 00:21:39.284 } 00:21:39.284 } 
00:21:39.284 ] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "sock", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": "sock_set_default_impl", 00:21:39.284 "params": { 00:21:39.284 "impl_name": "posix" 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "sock_impl_set_options", 00:21:39.284 "params": { 00:21:39.284 "impl_name": "ssl", 00:21:39.284 "recv_buf_size": 4096, 00:21:39.284 "send_buf_size": 4096, 00:21:39.284 "enable_recv_pipe": true, 00:21:39.284 "enable_quickack": false, 00:21:39.284 "enable_placement_id": 0, 00:21:39.284 "enable_zerocopy_send_server": true, 00:21:39.284 "enable_zerocopy_send_client": false, 00:21:39.284 "zerocopy_threshold": 0, 00:21:39.284 "tls_version": 0, 00:21:39.284 "enable_ktls": false 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "sock_impl_set_options", 00:21:39.284 "params": { 00:21:39.284 "impl_name": "posix", 00:21:39.284 "recv_buf_size": 2097152, 00:21:39.284 "send_buf_size": 2097152, 00:21:39.284 "enable_recv_pipe": true, 00:21:39.284 "enable_quickack": false, 00:21:39.284 "enable_placement_id": 0, 00:21:39.284 "enable_zerocopy_send_server": true, 00:21:39.284 "enable_zerocopy_send_client": false, 00:21:39.284 "zerocopy_threshold": 0, 00:21:39.284 "tls_version": 0, 00:21:39.284 "enable_ktls": false 00:21:39.284 } 00:21:39.284 } 00:21:39.284 ] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "vmd", 00:21:39.284 "config": [] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "accel", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": "accel_set_options", 00:21:39.284 "params": { 00:21:39.284 "small_cache_size": 128, 00:21:39.284 "large_cache_size": 16, 00:21:39.284 "task_count": 2048, 00:21:39.284 "sequence_count": 2048, 00:21:39.284 "buf_count": 2048 00:21:39.284 } 00:21:39.284 } 00:21:39.284 ] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "bdev", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": 
"bdev_set_options", 00:21:39.284 "params": { 00:21:39.284 "bdev_io_pool_size": 65535, 00:21:39.284 "bdev_io_cache_size": 256, 00:21:39.284 "bdev_auto_examine": true, 00:21:39.284 "iobuf_small_cache_size": 128, 00:21:39.284 "iobuf_large_cache_size": 16 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "bdev_raid_set_options", 00:21:39.284 "params": { 00:21:39.284 "process_window_size_kb": 1024, 00:21:39.284 "process_max_bandwidth_mb_sec": 0 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "bdev_iscsi_set_options", 00:21:39.284 "params": { 00:21:39.284 "timeout_sec": 30 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "bdev_nvme_set_options", 00:21:39.284 "params": { 00:21:39.284 "action_on_timeout": "none", 00:21:39.284 "timeout_us": 0, 00:21:39.284 "timeout_admin_us": 0, 00:21:39.284 "keep_alive_timeout_ms": 10000, 00:21:39.284 "arbitration_burst": 0, 00:21:39.284 "low_priority_weight": 0, 00:21:39.284 "medium_priority_weight": 0, 00:21:39.284 "high_priority_weight": 0, 00:21:39.284 "nvme_adminq_poll_period_us": 10000, 00:21:39.284 "nvme_ioq_poll_period_us": 0, 00:21:39.284 "io_queue_requests": 0, 00:21:39.284 "delay_cmd_submit": true, 00:21:39.284 "transport_retry_count": 4, 00:21:39.284 "bdev_retry_count": 3, 00:21:39.284 "transport_ack_timeout": 0, 00:21:39.284 "ctrlr_loss_timeout_sec": 0, 00:21:39.284 "reconnect_delay_sec": 0, 00:21:39.284 "fast_io_fail_timeout_sec": 0, 00:21:39.284 "disable_auto_failback": false, 00:21:39.284 "generate_uuids": false, 00:21:39.284 "transport_tos": 0, 00:21:39.284 "nvme_error_stat": false, 00:21:39.284 "rdma_srq_size": 0, 00:21:39.284 "io_path_stat": false, 00:21:39.284 "allow_accel_sequence": false, 00:21:39.284 "rdma_max_cq_size": 0, 00:21:39.284 "rdma_cm_event_timeout_ms": 0, 00:21:39.284 "dhchap_digests": [ 00:21:39.284 "sha256", 00:21:39.284 "sha384", 00:21:39.284 "sha512" 00:21:39.284 ], 00:21:39.284 "dhchap_dhgroups": [ 00:21:39.284 "null", 00:21:39.284 
"ffdhe2048", 00:21:39.284 "ffdhe3072", 00:21:39.284 "ffdhe4096", 00:21:39.284 "ffdhe6144", 00:21:39.284 "ffdhe8192" 00:21:39.284 ], 00:21:39.284 "rdma_umr_per_io": false 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "bdev_nvme_set_hotplug", 00:21:39.284 "params": { 00:21:39.284 "period_us": 100000, 00:21:39.284 "enable": false 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "bdev_malloc_create", 00:21:39.284 "params": { 00:21:39.284 "name": "malloc0", 00:21:39.284 "num_blocks": 8192, 00:21:39.284 "block_size": 4096, 00:21:39.284 "physical_block_size": 4096, 00:21:39.284 "uuid": "d8f1f1ad-8d3e-4c20-9249-5960fa2f8be1", 00:21:39.284 "optimal_io_boundary": 0, 00:21:39.284 "md_size": 0, 00:21:39.284 "dif_type": 0, 00:21:39.284 "dif_is_head_of_md": false, 00:21:39.284 "dif_pi_format": 0 00:21:39.284 } 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "method": "bdev_wait_for_examine" 00:21:39.284 } 00:21:39.284 ] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "nbd", 00:21:39.284 "config": [] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "scheduler", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": "framework_set_scheduler", 00:21:39.284 "params": { 00:21:39.284 "name": "static" 00:21:39.284 } 00:21:39.284 } 00:21:39.284 ] 00:21:39.284 }, 00:21:39.284 { 00:21:39.284 "subsystem": "nvmf", 00:21:39.284 "config": [ 00:21:39.284 { 00:21:39.284 "method": "nvmf_set_config", 00:21:39.284 "params": { 00:21:39.285 "discovery_filter": "match_any", 00:21:39.285 "admin_cmd_passthru": { 00:21:39.285 "identify_ctrlr": false 00:21:39.285 }, 00:21:39.285 "dhchap_digests": [ 00:21:39.285 "sha256", 00:21:39.285 "sha384", 00:21:39.285 "sha512" 00:21:39.285 ], 00:21:39.285 "dhchap_dhgroups": [ 00:21:39.285 "null", 00:21:39.285 "ffdhe2048", 00:21:39.285 "ffdhe3072", 00:21:39.285 "ffdhe4096", 00:21:39.285 "ffdhe6144", 00:21:39.285 "ffdhe8192" 00:21:39.285 ] 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 
"method": "nvmf_set_max_subsystems", 00:21:39.285 "params": { 00:21:39.285 "max_subsystems": 1024 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 "method": "nvmf_set_crdt", 00:21:39.285 "params": { 00:21:39.285 "crdt1": 0, 00:21:39.285 "crdt2": 0, 00:21:39.285 "crdt3": 0 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 "method": "nvmf_create_transport", 00:21:39.285 "params": { 00:21:39.285 "trtype": "TCP", 00:21:39.285 "max_queue_depth": 128, 00:21:39.285 "max_io_qpairs_per_ctrlr": 127, 00:21:39.285 "in_capsule_data_size": 4096, 00:21:39.285 "max_io_size": 131072, 00:21:39.285 "io_unit_size": 131072, 00:21:39.285 "max_aq_depth": 128, 00:21:39.285 "num_shared_buffers": 511, 00:21:39.285 "buf_cache_size": 4294967295, 00:21:39.285 "dif_insert_or_strip": false, 00:21:39.285 "zcopy": false, 00:21:39.285 "c2h_success": false, 00:21:39.285 "sock_priority": 0, 00:21:39.285 "abort_timeout_sec": 1, 00:21:39.285 "ack_timeout": 0, 00:21:39.285 "data_wr_pool_size": 0 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 "method": "nvmf_create_subsystem", 00:21:39.285 "params": { 00:21:39.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.285 "allow_any_host": false, 00:21:39.285 "serial_number": "00000000000000000000", 00:21:39.285 "model_number": "SPDK bdev Controller", 00:21:39.285 "max_namespaces": 32, 00:21:39.285 "min_cntlid": 1, 00:21:39.285 "max_cntlid": 65519, 00:21:39.285 "ana_reporting": false 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 "method": "nvmf_subsystem_add_host", 00:21:39.285 "params": { 00:21:39.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.285 "host": "nqn.2016-06.io.spdk:host1", 00:21:39.285 "psk": "key0" 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 "method": "nvmf_subsystem_add_ns", 00:21:39.285 "params": { 00:21:39.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.285 "namespace": { 00:21:39.285 "nsid": 1, 00:21:39.285 "bdev_name": "malloc0", 00:21:39.285 "nguid": 
"D8F1F1AD8D3E4C2092495960FA2F8BE1", 00:21:39.285 "uuid": "d8f1f1ad-8d3e-4c20-9249-5960fa2f8be1", 00:21:39.285 "no_auto_visible": false 00:21:39.285 } 00:21:39.285 } 00:21:39.285 }, 00:21:39.285 { 00:21:39.285 "method": "nvmf_subsystem_add_listener", 00:21:39.285 "params": { 00:21:39.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.285 "listen_address": { 00:21:39.285 "trtype": "TCP", 00:21:39.285 "adrfam": "IPv4", 00:21:39.285 "traddr": "10.0.0.2", 00:21:39.285 "trsvcid": "4420" 00:21:39.285 }, 00:21:39.285 "secure_channel": false, 00:21:39.285 "sock_impl": "ssl" 00:21:39.285 } 00:21:39.285 } 00:21:39.285 ] 00:21:39.285 } 00:21:39.285 ] 00:21:39.285 }' 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=160603 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 160603 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 160603 ']' 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.285 05:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.285 [2024-12-10 05:46:57.185158] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:39.285 [2024-12-10 05:46:57.185204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.544 [2024-12-10 05:46:57.267920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.544 [2024-12-10 05:46:57.306087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.544 [2024-12-10 05:46:57.306128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.544 [2024-12-10 05:46:57.306135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.544 [2024-12-10 05:46:57.306144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.544 [2024-12-10 05:46:57.306149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:39.544 [2024-12-10 05:46:57.306736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.802 [2024-12-10 05:46:57.519113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.802 [2024-12-10 05:46:57.551146] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.802 [2024-12-10 05:46:57.551355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=160842 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 160842 /var/tmp/bdevperf.sock 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 160842 ']' 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.370 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:40.370 "subsystems": [ 00:21:40.370 { 00:21:40.370 "subsystem": "keyring", 00:21:40.370 "config": [ 00:21:40.370 { 00:21:40.370 "method": "keyring_file_add_key", 00:21:40.370 "params": { 00:21:40.370 "name": "key0", 00:21:40.370 "path": "/tmp/tmp.ovQJZlz7JE" 00:21:40.370 } 00:21:40.370 } 00:21:40.370 ] 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "subsystem": "iobuf", 00:21:40.370 "config": [ 00:21:40.370 { 00:21:40.370 "method": "iobuf_set_options", 00:21:40.370 "params": { 00:21:40.370 "small_pool_count": 8192, 00:21:40.370 "large_pool_count": 1024, 00:21:40.370 "small_bufsize": 8192, 00:21:40.370 "large_bufsize": 135168, 00:21:40.370 "enable_numa": false 00:21:40.370 } 00:21:40.370 } 00:21:40.370 ] 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "subsystem": "sock", 00:21:40.370 "config": [ 00:21:40.370 { 00:21:40.370 "method": "sock_set_default_impl", 00:21:40.370 "params": { 00:21:40.370 "impl_name": "posix" 00:21:40.370 } 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "method": "sock_impl_set_options", 00:21:40.370 "params": { 00:21:40.370 "impl_name": "ssl", 00:21:40.370 "recv_buf_size": 4096, 00:21:40.370 "send_buf_size": 4096, 00:21:40.370 "enable_recv_pipe": true, 00:21:40.370 "enable_quickack": false, 00:21:40.370 "enable_placement_id": 0, 00:21:40.370 "enable_zerocopy_send_server": true, 00:21:40.370 "enable_zerocopy_send_client": false, 00:21:40.370 "zerocopy_threshold": 0, 00:21:40.370 "tls_version": 0, 00:21:40.370 "enable_ktls": false 00:21:40.370 } 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "method": "sock_impl_set_options", 00:21:40.370 "params": { 
00:21:40.370 "impl_name": "posix", 00:21:40.370 "recv_buf_size": 2097152, 00:21:40.370 "send_buf_size": 2097152, 00:21:40.370 "enable_recv_pipe": true, 00:21:40.370 "enable_quickack": false, 00:21:40.370 "enable_placement_id": 0, 00:21:40.370 "enable_zerocopy_send_server": true, 00:21:40.370 "enable_zerocopy_send_client": false, 00:21:40.370 "zerocopy_threshold": 0, 00:21:40.370 "tls_version": 0, 00:21:40.370 "enable_ktls": false 00:21:40.370 } 00:21:40.370 } 00:21:40.370 ] 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "subsystem": "vmd", 00:21:40.370 "config": [] 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "subsystem": "accel", 00:21:40.370 "config": [ 00:21:40.370 { 00:21:40.370 "method": "accel_set_options", 00:21:40.370 "params": { 00:21:40.370 "small_cache_size": 128, 00:21:40.370 "large_cache_size": 16, 00:21:40.370 "task_count": 2048, 00:21:40.370 "sequence_count": 2048, 00:21:40.370 "buf_count": 2048 00:21:40.370 } 00:21:40.370 } 00:21:40.370 ] 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "subsystem": "bdev", 00:21:40.370 "config": [ 00:21:40.370 { 00:21:40.370 "method": "bdev_set_options", 00:21:40.370 "params": { 00:21:40.370 "bdev_io_pool_size": 65535, 00:21:40.370 "bdev_io_cache_size": 256, 00:21:40.370 "bdev_auto_examine": true, 00:21:40.370 "iobuf_small_cache_size": 128, 00:21:40.371 "iobuf_large_cache_size": 16 00:21:40.371 } 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "method": "bdev_raid_set_options", 00:21:40.371 "params": { 00:21:40.371 "process_window_size_kb": 1024, 00:21:40.371 "process_max_bandwidth_mb_sec": 0 00:21:40.371 } 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "method": "bdev_iscsi_set_options", 00:21:40.371 "params": { 00:21:40.371 "timeout_sec": 30 00:21:40.371 } 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "method": "bdev_nvme_set_options", 00:21:40.371 "params": { 00:21:40.371 "action_on_timeout": "none", 00:21:40.371 "timeout_us": 0, 00:21:40.371 "timeout_admin_us": 0, 00:21:40.371 "keep_alive_timeout_ms": 10000, 00:21:40.371 
"arbitration_burst": 0, 00:21:40.371 "low_priority_weight": 0, 00:21:40.371 "medium_priority_weight": 0, 00:21:40.371 "high_priority_weight": 0, 00:21:40.371 "nvme_adminq_poll_period_us": 10000, 00:21:40.371 "nvme_ioq_poll_period_us": 0, 00:21:40.371 "io_queue_requests": 512, 00:21:40.371 "delay_cmd_submit": true, 00:21:40.371 "transport_retry_count": 4, 00:21:40.371 "bdev_retry_count": 3, 00:21:40.371 "transport_ack_timeout": 0, 00:21:40.371 "ctrlr_loss_timeout_sec": 0, 00:21:40.371 "reconnect_delay_sec": 0, 00:21:40.371 "fast_io_fail_timeout_sec": 0, 00:21:40.371 "disable_auto_failback": false, 00:21:40.371 "generate_uuids": false, 00:21:40.371 "transport_tos": 0, 00:21:40.371 "nvme_error_stat": false, 00:21:40.371 "rdma_srq_size": 0, 00:21:40.371 "io_path_stat": false, 00:21:40.371 "allow_accel_sequence": false, 00:21:40.371 "rdma_max_cq_size": 0, 00:21:40.371 "rdma_cm_event_timeout_ms": 0, 00:21:40.371 "dhchap_digests": [ 00:21:40.371 "sha256", 00:21:40.371 "sha384", 00:21:40.371 "sha512" 00:21:40.371 ], 00:21:40.371 "dhchap_dhgroups": [ 00:21:40.371 "null", 00:21:40.371 "ffdhe2048", 00:21:40.371 "ffdhe3072", 00:21:40.371 "ffdhe4096", 00:21:40.371 "ffdhe6144", 00:21:40.371 "ffdhe8192" 00:21:40.371 ], 00:21:40.371 "rdma_umr_per_io": false 00:21:40.371 } 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "method": "bdev_nvme_attach_controller", 00:21:40.371 "params": { 00:21:40.371 "name": "nvme0", 00:21:40.371 "trtype": "TCP", 00:21:40.371 "adrfam": "IPv4", 00:21:40.371 "traddr": "10.0.0.2", 00:21:40.371 "trsvcid": "4420", 00:21:40.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.371 "prchk_reftag": false, 00:21:40.371 "prchk_guard": false, 00:21:40.371 "ctrlr_loss_timeout_sec": 0, 00:21:40.371 "reconnect_delay_sec": 0, 00:21:40.371 "fast_io_fail_timeout_sec": 0, 00:21:40.371 "psk": "key0", 00:21:40.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.371 "hdgst": false, 00:21:40.371 "ddgst": false, 00:21:40.371 "multipath": "multipath" 00:21:40.371 } 00:21:40.371 
}, 00:21:40.371 { 00:21:40.371 "method": "bdev_nvme_set_hotplug", 00:21:40.371 "params": { 00:21:40.371 "period_us": 100000, 00:21:40.371 "enable": false 00:21:40.371 } 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "method": "bdev_enable_histogram", 00:21:40.371 "params": { 00:21:40.371 "name": "nvme0n1", 00:21:40.371 "enable": true 00:21:40.371 } 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "method": "bdev_wait_for_examine" 00:21:40.371 } 00:21:40.371 ] 00:21:40.371 }, 00:21:40.371 { 00:21:40.371 "subsystem": "nbd", 00:21:40.371 "config": [] 00:21:40.371 } 00:21:40.371 ] 00:21:40.371 }' 00:21:40.371 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.371 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.371 [2024-12-10 05:46:58.112477] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:21:40.371 [2024-12-10 05:46:58.112525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160842 ] 00:21:40.371 [2024-12-10 05:46:58.194490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.371 [2024-12-10 05:46:58.233474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.630 [2024-12-10 05:46:58.387348] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.196 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.196 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:41.196 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:21:41.196 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:41.455 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.455 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.455 Running I/O for 1 seconds... 00:21:42.390 5440.00 IOPS, 21.25 MiB/s 00:21:42.390 Latency(us) 00:21:42.390 [2024-12-10T04:47:00.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.390 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:42.390 Verification LBA range: start 0x0 length 0x2000 00:21:42.390 nvme0n1 : 1.01 5499.50 21.48 0.00 0.00 23121.01 5398.92 20347.37 00:21:42.390 [2024-12-10T04:47:00.349Z] =================================================================================================================== 00:21:42.390 [2024-12-10T04:47:00.349Z] Total : 5499.50 21.48 0.00 0.00 23121.01 5398.92 20347.37 00:21:42.390 { 00:21:42.390 "results": [ 00:21:42.390 { 00:21:42.390 "job": "nvme0n1", 00:21:42.390 "core_mask": "0x2", 00:21:42.390 "workload": "verify", 00:21:42.390 "status": "finished", 00:21:42.390 "verify_range": { 00:21:42.390 "start": 0, 00:21:42.390 "length": 8192 00:21:42.390 }, 00:21:42.390 "queue_depth": 128, 00:21:42.390 "io_size": 4096, 00:21:42.390 "runtime": 1.012638, 00:21:42.390 "iops": 5499.497352459615, 00:21:42.390 "mibps": 21.482411533045372, 00:21:42.390 "io_failed": 0, 00:21:42.390 "io_timeout": 0, 00:21:42.390 "avg_latency_us": 23121.00990619843, 00:21:42.390 "min_latency_us": 5398.918095238095, 00:21:42.390 "max_latency_us": 20347.367619047618 00:21:42.390 } 00:21:42.390 ], 00:21:42.390 "core_count": 1 00:21:42.390 } 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:42.390 
05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:42.390 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:42.390 nvmf_trace.0 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 160842 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 160842 ']' 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 160842 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 160842 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160842' 00:21:42.649 killing process with pid 160842 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 160842 00:21:42.649 Received shutdown signal, test time was about 1.000000 seconds 00:21:42.649 00:21:42.649 Latency(us) 00:21:42.649 [2024-12-10T04:47:00.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.649 [2024-12-10T04:47:00.608Z] =================================================================================================================== 00:21:42.649 [2024-12-10T04:47:00.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 160842 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.649 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.649 rmmod nvme_tcp 00:21:42.649 rmmod nvme_fabrics 00:21:42.908 rmmod nvme_keyring 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 160603 ']' 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 160603 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 160603 ']' 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 160603 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160603 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160603' 00:21:42.908 killing process with pid 160603 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 160603 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 160603 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.908 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.167 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.167 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.167 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.167 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.167 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aiG4XYFVuk /tmp/tmp.8i8Yo0Gtop /tmp/tmp.ovQJZlz7JE 00:21:45.070 00:21:45.070 real 1m22.093s 00:21:45.070 user 2m4.616s 00:21:45.070 sys 0m31.208s 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 ************************************ 00:21:45.070 END TEST nvmf_tls 00:21:45.070 ************************************ 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.070 05:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 ************************************ 00:21:45.070 START TEST nvmf_fips 00:21:45.070 ************************************ 00:21:45.070 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:45.330 * Looking for test storage... 00:21:45.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.330 
05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:45.330 05:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:45.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.330 --rc genhtml_branch_coverage=1 00:21:45.330 --rc genhtml_function_coverage=1 00:21:45.330 --rc genhtml_legend=1 00:21:45.330 --rc geninfo_all_blocks=1 00:21:45.330 --rc geninfo_unexecuted_blocks=1 00:21:45.330 00:21:45.330 ' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:45.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.330 --rc genhtml_branch_coverage=1 00:21:45.330 --rc genhtml_function_coverage=1 00:21:45.330 --rc genhtml_legend=1 00:21:45.330 --rc geninfo_all_blocks=1 00:21:45.330 --rc geninfo_unexecuted_blocks=1 00:21:45.330 00:21:45.330 ' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:45.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.330 --rc genhtml_branch_coverage=1 00:21:45.330 --rc genhtml_function_coverage=1 00:21:45.330 --rc genhtml_legend=1 00:21:45.330 --rc geninfo_all_blocks=1 00:21:45.330 --rc geninfo_unexecuted_blocks=1 00:21:45.330 00:21:45.330 ' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:45.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.330 --rc genhtml_branch_coverage=1 00:21:45.330 --rc genhtml_function_coverage=1 00:21:45.330 --rc genhtml_legend=1 00:21:45.330 --rc geninfo_all_blocks=1 00:21:45.330 --rc geninfo_unexecuted_blocks=1 00:21:45.330 00:21:45.330 ' 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
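The xtrace above walks through the `cmp_versions` helper from `scripts/common.sh` field by field (splitting on `IFS=.-:`, padding missing fields, deciding at the first difference). A minimal standalone sketch of the same dotted-version comparison — simplified, splitting only on dots and supporting only a less-than test, unlike the real helper's full operator set:

```shell
#!/usr/bin/env bash
# ver_lt A B: exit 0 when dotted version A < B, 1 otherwise.
# Simplified sketch of the cmp_versions logic traced above.
ver_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a > b )) && return 1             # first differing field decides
        (( a < b )) && return 0
    done
    return 1                                # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"                 # the lcov check in the log
ver_lt 3.1.1 3.0.0 || echo "3.1.1 >= 3.0.0"      # the openssl 'ge' check
```

This mirrors why the log's `ge 3.1.1 3.0.0` returns 0: the first field ties at 3, the second field (1 vs 0) settles it.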
00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.330 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.331 05:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.331 05:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:45.331 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:45.591 Error setting digest 00:21:45.591 40920708047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:45.591 40920708047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.591 05:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.591 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:52.160 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:52.160 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:52.160 Found net devices under 0000:af:00.0: cvl_0_0 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:52.160 Found net devices under 0000:af:00.1: cvl_0_1 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.160 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.161 05:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.161 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:21:52.161 00:21:52.161 --- 10.0.0.2 ping statistics --- 00:21:52.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.161 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:52.161 00:21:52.161 --- 10.0.0.1 ping statistics --- 00:21:52.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.161 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.161 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.420 05:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=165119 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 165119 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 165119 ']' 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.420 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.420 [2024-12-10 05:47:10.192981] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:21:52.420 [2024-12-10 05:47:10.193024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.420 [2024-12-10 05:47:10.273140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.420 [2024-12-10 05:47:10.313071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.420 [2024-12-10 05:47:10.313106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.420 [2024-12-10 05:47:10.313117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.420 [2024-12-10 05:47:10.313123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.420 [2024-12-10 05:47:10.313128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.420 [2024-12-10 05:47:10.313671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Knv 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Knv 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Knv 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Knv 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.355 [2024-12-10 05:47:11.224293] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.355 [2024-12-10 05:47:11.240292] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.355 [2024-12-10 05:47:11.240507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.355 malloc0 00:21:53.355 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=165362 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 165362 /var/tmp/bdevperf.sock 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 165362 ']' 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.614 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:53.614 [2024-12-10 05:47:11.370871] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:21:53.614 [2024-12-10 05:47:11.370919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165362 ] 00:21:53.614 [2024-12-10 05:47:11.441097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.614 [2024-12-10 05:47:11.480137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.549 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.549 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:54.549 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Knv 00:21:54.549 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:54.808 [2024-12-10 05:47:12.577284] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.808 TLSTESTn1 00:21:54.808 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.808 Running I/O for 10 seconds... 
00:21:56.916 5440.00 IOPS, 21.25 MiB/s [2024-12-10T04:47:15.809Z] 5422.00 IOPS, 21.18 MiB/s [2024-12-10T04:47:17.185Z] 5483.00 IOPS, 21.42 MiB/s [2024-12-10T04:47:18.120Z] 5518.75 IOPS, 21.56 MiB/s [2024-12-10T04:47:19.056Z] 5522.80 IOPS, 21.57 MiB/s [2024-12-10T04:47:19.992Z] 5540.33 IOPS, 21.64 MiB/s [2024-12-10T04:47:20.928Z] 5552.14 IOPS, 21.69 MiB/s [2024-12-10T04:47:21.865Z] 5555.75 IOPS, 21.70 MiB/s [2024-12-10T04:47:22.800Z] 5568.78 IOPS, 21.75 MiB/s [2024-12-10T04:47:22.800Z] 5556.40 IOPS, 21.70 MiB/s 00:22:04.841 Latency(us) 00:22:04.841 [2024-12-10T04:47:22.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.841 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:04.841 Verification LBA range: start 0x0 length 0x2000 00:22:04.841 TLSTESTn1 : 10.01 5561.06 21.72 0.00 0.00 22983.30 6023.07 24841.26 00:22:04.841 [2024-12-10T04:47:22.800Z] =================================================================================================================== 00:22:04.841 [2024-12-10T04:47:22.800Z] Total : 5561.06 21.72 0.00 0.00 22983.30 6023.07 24841.26 00:22:05.100 { 00:22:05.100 "results": [ 00:22:05.100 { 00:22:05.100 "job": "TLSTESTn1", 00:22:05.100 "core_mask": "0x4", 00:22:05.100 "workload": "verify", 00:22:05.100 "status": "finished", 00:22:05.100 "verify_range": { 00:22:05.100 "start": 0, 00:22:05.100 "length": 8192 00:22:05.100 }, 00:22:05.100 "queue_depth": 128, 00:22:05.100 "io_size": 4096, 00:22:05.100 "runtime": 10.014091, 00:22:05.100 "iops": 5561.063904851673, 00:22:05.100 "mibps": 21.72290587832685, 00:22:05.100 "io_failed": 0, 00:22:05.100 "io_timeout": 0, 00:22:05.100 "avg_latency_us": 22983.300453248437, 00:22:05.100 "min_latency_us": 6023.070476190476, 00:22:05.100 "max_latency_us": 24841.26476190476 00:22:05.100 } 00:22:05.100 ], 00:22:05.100 "core_count": 1 00:22:05.100 } 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:05.100 
05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:05.100 nvmf_trace.0 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 165362 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 165362 ']' 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 165362 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165362 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165362' 00:22:05.100 killing process with pid 165362 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 165362 00:22:05.100 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.100 00:22:05.100 Latency(us) 00:22:05.100 [2024-12-10T04:47:23.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.100 [2024-12-10T04:47:23.059Z] =================================================================================================================== 00:22:05.100 [2024-12-10T04:47:23.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.100 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 165362 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.359 rmmod nvme_tcp 00:22:05.359 rmmod nvme_fabrics 00:22:05.359 rmmod nvme_keyring 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.359 05:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 165119 ']' 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 165119 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 165119 ']' 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 165119 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165119 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165119' 00:22:05.359 killing process with pid 165119 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 165119 00:22:05.359 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 165119 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.619 05:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.524 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.524 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Knv 00:22:07.524 00:22:07.524 real 0m22.469s 00:22:07.524 user 0m23.628s 00:22:07.524 sys 0m10.283s 00:22:07.524 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.524 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:07.524 ************************************ 00:22:07.524 END TEST nvmf_fips 00:22:07.524 ************************************ 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:07.782 ************************************ 00:22:07.782 START TEST nvmf_control_msg_list 00:22:07.782 ************************************ 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:07.782 * Looking for test storage... 00:22:07.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:22:07.782 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.783 --rc genhtml_branch_coverage=1 00:22:07.783 --rc genhtml_function_coverage=1 00:22:07.783 --rc genhtml_legend=1 00:22:07.783 --rc geninfo_all_blocks=1 00:22:07.783 --rc geninfo_unexecuted_blocks=1 00:22:07.783 00:22:07.783 ' 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.783 --rc genhtml_branch_coverage=1 00:22:07.783 --rc genhtml_function_coverage=1 00:22:07.783 --rc genhtml_legend=1 00:22:07.783 --rc geninfo_all_blocks=1 00:22:07.783 --rc geninfo_unexecuted_blocks=1 00:22:07.783 00:22:07.783 ' 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.783 --rc genhtml_branch_coverage=1 00:22:07.783 --rc genhtml_function_coverage=1 00:22:07.783 --rc genhtml_legend=1 00:22:07.783 --rc geninfo_all_blocks=1 00:22:07.783 --rc geninfo_unexecuted_blocks=1 00:22:07.783 00:22:07.783 ' 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.783 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.783 --rc genhtml_branch_coverage=1 00:22:07.783 --rc genhtml_function_coverage=1 00:22:07.783 --rc genhtml_legend=1 00:22:07.783 --rc geninfo_all_blocks=1 00:22:07.783 --rc geninfo_unexecuted_blocks=1 00:22:07.783 00:22:07.783 ' 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:07.783 05:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.783 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.042 05:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.042 05:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.042 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.612 05:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.612 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:14.613 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:14.613 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.613 05:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:14.613 Found net devices under 0000:af:00.0: cvl_0_0 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.613 05:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:14.613 Found net devices under 0000:af:00.1: cvl_0_1 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.613 05:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:22:14.613 00:22:14.613 --- 10.0.0.2 ping statistics --- 00:22:14.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.613 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:22:14.613 00:22:14.613 --- 10.0.0.1 ping statistics --- 00:22:14.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.613 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
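The trace above (nvmf/common.sh@250-291) sets up the TCP test topology: the target-side port is moved into a network namespace, both ends get a 10.0.0.x/24 address, the firewall is opened on 4420, and a ping in each direction confirms connectivity before the target starts. The sequence can be sketched as below; interface names and addresses mirror the log, but the `run` wrapper is an addition of this sketch — commands are only echoed unless `EXECUTE=1` is set, since the real commands require root.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init namespace wiring traced above.
# Dry-run by default: set EXECUTE=1 to actually run (requires root).
set -euo pipefail

NS=cvl_0_0_ns_spdk   # namespace that will own the target-side port
TGT_IF=cvl_0_0       # target interface, moved into the namespace
INI_IF=cvl_0_1       # initiator interface, stays in the root namespace

run() {
    if [[ "${EXECUTE:-0}" == 1 ]]; then "$@"; else echo "+ $*"; fi
}

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagged so teardown can strip the rule again.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Both directions must answer before the test proceeds.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The comment tag on the iptables rule is what the later `iptr`/`iptables-save | grep -v SPDK_NVMF` cleanup keys on.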
tcp -o' 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.613 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=171191 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 171191 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 171191 ']' 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.873 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:14.873 [2024-12-10 05:47:32.634363] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:22:14.873 [2024-12-10 05:47:32.634405] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.873 [2024-12-10 05:47:32.719264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.873 [2024-12-10 05:47:32.758980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.873 [2024-12-10 05:47:32.759015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.873 [2024-12-10 05:47:32.759022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.873 [2024-12-10 05:47:32.759028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.873 [2024-12-10 05:47:32.759033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.873 [2024-12-10 05:47:32.759559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 [2024-12-10 05:47:33.525312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 Malloc0 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 [2024-12-10 05:47:33.573734] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=171409 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=171411 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=171413 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 171409 00:22:15.808 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.808 [2024-12-10 05:47:33.658313] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
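The control_msg_list.sh@19-23 trace above builds the target over RPC: a TCP transport restricted to a single control message, a 32 MiB malloc bdev as the namespace, and a listener on 10.0.0.2:4420. A sketch of that sequence follows; the `scripts/rpc.py` path and the assumption that `nvmf_tgt` is already running are mine, and commands are echoed unless `EXECUTE=1` is set.

```shell
#!/usr/bin/env bash
# Sketch of the RPC setup traced in control_msg_list.sh above.
# Dry-run by default: set EXECUTE=1 to send the RPCs to a running nvmf_tgt.
set -euo pipefail

RPC=${RPC:-scripts/rpc.py}        # SPDK RPC client (assumed path)
NQN=nqn.2024-07.io.spdk:cnode0

rpc() {
    if [[ "${EXECUTE:-0}" == 1 ]]; then "$RPC" "$@"; else echo "+ $RPC $*"; fi
}

# A small in-capsule data size plus a single control message buffer is what
# exercises the control-message-list path this test targets.
rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc nvmf_create_subsystem "$NQN" -a        # -a: allow any host
rpc bdev_malloc_create -b Malloc0 32 512   # 32 MiB bdev, 512-byte blocks
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

In the actual test these RPCs go through `rpc_cmd`, which talks to the `nvmf_tgt` started inside the `cvl_0_0_ns_spdk` namespace earlier in the log.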
00:22:15.808 [2024-12-10 05:47:33.658485] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:15.808 [2024-12-10 05:47:33.668104] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:17.179 Initializing NVMe Controllers 00:22:17.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:17.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:17.179 Initialization complete. Launching workers. 00:22:17.179 ======================================================== 00:22:17.179 Latency(us) 00:22:17.179 Device Information : IOPS MiB/s Average min max 00:22:17.179 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6672.95 26.07 149.52 122.44 364.56 00:22:17.179 ======================================================== 00:22:17.179 Total : 6672.95 26.07 149.52 122.44 364.56 00:22:17.179 00:22:17.179 Initializing NVMe Controllers 00:22:17.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:17.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:17.179 Initialization complete. Launching workers. 
00:22:17.180 ======================================================== 00:22:17.180 Latency(us) 00:22:17.180 Device Information : IOPS MiB/s Average min max 00:22:17.180 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41101.76 40814.66 42002.62 00:22:17.180 ======================================================== 00:22:17.180 Total : 25.00 0.10 41101.76 40814.66 42002.62 00:22:17.180 00:22:17.180 Initializing NVMe Controllers 00:22:17.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:17.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:17.180 Initialization complete. Launching workers. 00:22:17.180 ======================================================== 00:22:17.180 Latency(us) 00:22:17.180 Device Information : IOPS MiB/s Average min max 00:22:17.180 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6628.99 25.89 150.52 127.58 370.38 00:22:17.180 ======================================================== 00:22:17.180 Total : 6628.99 25.89 150.52 127.58 370.38 00:22:17.180 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 171411 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 171413 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.180 05:47:34 
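The three latency tables above come from three `spdk_nvme_perf` instances launched concurrently in control_msg_list.sh@26-33, each pinned to its own core (0x2, 0x4, 0x8) and issuing queue-depth-1 4 KiB random reads for one second against the same subsystem. The launch pattern can be sketched as below; the binary path matches the log, the `perf_run` wrapper is an addition, and commands are echoed unless `EXECUTE=1` is set.

```shell
#!/usr/bin/env bash
# Sketch of the three concurrent perf initiators whose results appear above.
# Dry-run by default: set EXECUTE=1 to run the real binary.
set -euo pipefail

PERF=${PERF:-build/bin/spdk_nvme_perf}
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

perf_run() {
    local mask=$1   # core mask for this initiator
    if [[ "${EXECUTE:-0}" == 1 ]]; then
        "$PERF" -c "$mask" -q 1 -o 4096 -w randread -t 1 -r "$TRID"
    else
        echo "+ $PERF -c $mask -q 1 -o 4096 -w randread -t 1 -r $TRID"
    fi
}

perf_run 0x2 & p1=$!
perf_run 0x4 & p2=$!
perf_run 0x8 & p3=$!
wait "$p1" "$p2" "$p3"   # mirrors the 'wait $perf_pid1' ... calls in the log
```

With only one control message buffer configured on the transport, the initiators contend for it — which is consistent with one of the runs above reporting ~41 ms average latency while the other two stay near 150 µs.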
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.180 rmmod nvme_tcp 00:22:17.180 rmmod nvme_fabrics 00:22:17.180 rmmod nvme_keyring 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 171191 ']' 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 171191 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 171191 ']' 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 171191 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171191 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 171191' 00:22:17.180 killing process with pid 171191 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 171191 00:22:17.180 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 171191 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.438 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.342 00:22:19.342 real 0m11.682s 00:22:19.342 user 0m7.740s 00:22:19.342 
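The nvmftestfini trace above tears the topology back down: the SPDK-tagged iptables rule is stripped by filtering `iptables-save` output, the target namespace is removed, the initiator address flushed, and the NVMe kernel modules unloaded. A sketch, assuming the same names as the setup; commands are echoed unless `EXECUTE=1` is set (the real ones require root).

```shell
#!/usr/bin/env bash
# Sketch of the nvmftestfini teardown traced above.
# Dry-run by default: set EXECUTE=1 to actually run (requires root).
set -euo pipefail

NS=cvl_0_0_ns_spdk
INI_IF=cvl_0_1

run() {
    if [[ "${EXECUTE:-0}" == 1 ]]; then "$@"; else echo "+ $*"; fi
}

# Setup tagged its rules with an SPDK_NVMF comment, so restoring the table
# minus those lines removes exactly what the test added.
if [[ "${EXECUTE:-0}" == 1 ]]; then
    iptables-save | grep -v SPDK_NVMF | iptables-restore
else
    echo "+ iptables-save | grep -v SPDK_NVMF | iptables-restore"
fi
run ip netns delete "$NS"        # _remove_spdk_ns in the log
run ip -4 addr flush "$INI_IF"
run modprobe -v -r nvme-tcp      # 'rmmod nvme_tcp' lines above
run modprobe -v -r nvme-fabrics
```

Unloading is attempted in a retry loop in the real script (nvmf/common.sh@125, `for i in {1..20}` with `set +e`), since the modules can still be busy right after the connections close.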
sys 0m6.049s 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:19.342 ************************************ 00:22:19.342 END TEST nvmf_control_msg_list 00:22:19.342 ************************************ 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.342 05:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:19.602 ************************************ 00:22:19.602 START TEST nvmf_wait_for_buf 00:22:19.602 ************************************ 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:19.602 * Looking for test storage... 
00:22:19.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:22:19.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.602 --rc genhtml_branch_coverage=1 00:22:19.602 --rc genhtml_function_coverage=1 00:22:19.602 --rc genhtml_legend=1 00:22:19.602 --rc geninfo_all_blocks=1 00:22:19.602 --rc geninfo_unexecuted_blocks=1 00:22:19.602 00:22:19.602 ' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.602 --rc genhtml_branch_coverage=1 00:22:19.602 --rc genhtml_function_coverage=1 00:22:19.602 --rc genhtml_legend=1 00:22:19.602 --rc geninfo_all_blocks=1 00:22:19.602 --rc geninfo_unexecuted_blocks=1 00:22:19.602 00:22:19.602 ' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.602 --rc genhtml_branch_coverage=1 00:22:19.602 --rc genhtml_function_coverage=1 00:22:19.602 --rc genhtml_legend=1 00:22:19.602 --rc geninfo_all_blocks=1 00:22:19.602 --rc geninfo_unexecuted_blocks=1 00:22:19.602 00:22:19.602 ' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.602 --rc genhtml_branch_coverage=1 00:22:19.602 --rc genhtml_function_coverage=1 00:22:19.602 --rc genhtml_legend=1 00:22:19.602 --rc geninfo_all_blocks=1 00:22:19.602 --rc geninfo_unexecuted_blocks=1 00:22:19.602 00:22:19.602 ' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.602 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.603 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:26.171 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:26.171 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:26.171 Found net devices under 0000:af:00.0: cvl_0_0 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.171 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.172 05:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:26.172 Found net devices under 0000:af:00.1: cvl_0_1 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.172 05:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.172 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.172 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.172 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.172 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.172 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.172 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.172 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.431 05:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:22:26.431 00:22:26.431 --- 10.0.0.2 ping statistics --- 00:22:26.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.431 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:26.431 00:22:26.431 --- 10.0.0.1 ping statistics --- 00:22:26.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.431 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=175454 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 175454 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 175454 ']' 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.431 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.431 [2024-12-10 05:47:44.307797] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:22:26.431 [2024-12-10 05:47:44.307841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.690 [2024-12-10 05:47:44.389374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.690 [2024-12-10 05:47:44.428416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.690 [2024-12-10 05:47:44.428450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:26.690 [2024-12-10 05:47:44.428457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.690 [2024-12-10 05:47:44.428463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.690 [2024-12-10 05:47:44.428468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.690 [2024-12-10 05:47:44.429005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 
05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 Malloc0 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.690 [2024-12-10 05:47:44.586451] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.690 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.691 [2024-12-10 05:47:44.614638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.691 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:26.691 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:26.948 [2024-12-10 05:47:44.710299] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:28.324 Initializing NVMe Controllers
00:22:28.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:28.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:28.324 Initialization complete. Launching workers.
00:22:28.324 ========================================================
00:22:28.324 Latency(us)
00:22:28.324 Device Information : IOPS MiB/s Average min max
00:22:28.324 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33539.39 29962.91 71083.66
00:22:28.324 ========================================================
00:22:28.324 Total : 124.00 15.50 33539.39 29962.91 71083.66
00:22:28.324
00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:28.324 05:47:46
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.324 rmmod nvme_tcp 00:22:28.324 rmmod nvme_fabrics 00:22:28.324 rmmod nvme_keyring 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 175454 ']' 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 175454 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 175454 ']' 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 175454 
00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.324 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175454 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175454' 00:22:28.584 killing process with pid 175454 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 175454 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 175454 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.584 05:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.584 05:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.119 00:22:31.119 real 0m11.207s 00:22:31.119 user 0m4.146s 00:22:31.119 sys 0m5.528s 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:31.119 ************************************ 00:22:31.119 END TEST nvmf_wait_for_buf 00:22:31.119 ************************************ 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.119 05:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.689 
05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:37.689 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.689 05:47:54 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:37.689 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:37.689 Found net devices under 0000:af:00.0: cvl_0_0 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:37.689 Found net devices under 0000:af:00.1: cvl_0_1 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.689 05:47:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.689 ************************************ 00:22:37.689 START TEST nvmf_perf_adq 00:22:37.689 ************************************ 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:37.689 * Looking for test storage... 00:22:37.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.689 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.689 --rc genhtml_branch_coverage=1 00:22:37.689 --rc genhtml_function_coverage=1 00:22:37.689 --rc genhtml_legend=1 00:22:37.690 --rc geninfo_all_blocks=1 00:22:37.690 --rc geninfo_unexecuted_blocks=1 00:22:37.690 00:22:37.690 ' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.690 --rc genhtml_branch_coverage=1 00:22:37.690 --rc genhtml_function_coverage=1 00:22:37.690 --rc genhtml_legend=1 00:22:37.690 --rc geninfo_all_blocks=1 00:22:37.690 --rc geninfo_unexecuted_blocks=1 00:22:37.690 00:22:37.690 ' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.690 --rc genhtml_branch_coverage=1 00:22:37.690 --rc genhtml_function_coverage=1 00:22:37.690 --rc genhtml_legend=1 00:22:37.690 --rc geninfo_all_blocks=1 00:22:37.690 --rc geninfo_unexecuted_blocks=1 00:22:37.690 00:22:37.690 ' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.690 --rc genhtml_branch_coverage=1 00:22:37.690 --rc genhtml_function_coverage=1 00:22:37.690 --rc genhtml_legend=1 00:22:37.690 --rc geninfo_all_blocks=1 00:22:37.690 --rc geninfo_unexecuted_blocks=1 00:22:37.690 00:22:37.690 ' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.690 05:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.690 05:47:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.255 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.255 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.255 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.255 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.255 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.256 05:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:44.256 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:44.256 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:44.256 Found net devices under 0000:af:00.0: cvl_0_0 00:22:44.256 05:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:44.256 Found net devices under 0000:af:00.1: cvl_0_1 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:22:44.256 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:45.192 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:47.725 05:48:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.995 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:52.996 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:52.996 05:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:52.996 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:52.996 Found net devices under 0000:af:00.0: cvl_0_0 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:52.996 Found net devices under 0000:af:00.1: cvl_0_1 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:52.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:22:52.996 00:22:52.996 --- 10.0.0.2 ping statistics --- 00:22:52.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.996 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:52.996 00:22:52.996 --- 10.0.0.1 ping statistics --- 00:22:52.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.996 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:52.996 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=185342 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 185342 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 185342 ']' 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.997 05:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.997 [2024-12-10 05:48:10.652887] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:22:52.997 [2024-12-10 05:48:10.652930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.997 [2024-12-10 05:48:10.736470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.997 [2024-12-10 05:48:10.776962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.997 [2024-12-10 05:48:10.777003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.997 [2024-12-10 05:48:10.777010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.997 [2024-12-10 05:48:10.777016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.997 [2024-12-10 05:48:10.777021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:52.997 [2024-12-10 05:48:10.778484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.997 [2024-12-10 05:48:10.778592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.997 [2024-12-10 05:48:10.778718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.997 [2024-12-10 05:48:10.778720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.561 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.561 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:53.561 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.561 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.561 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:53.819 05:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 [2024-12-10 05:48:11.663055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 Malloc1 00:22:53.819 05:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.819 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.820 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.820 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.820 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.820 [2024-12-10 05:48:11.726661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.820 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.820 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=185461 00:22:53.820 05:48:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:53.820 05:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:56.340 "tick_rate": 2100000000, 00:22:56.340 "poll_groups": [ 00:22:56.340 { 00:22:56.340 "name": "nvmf_tgt_poll_group_000", 00:22:56.340 "admin_qpairs": 1, 00:22:56.340 "io_qpairs": 1, 00:22:56.340 "current_admin_qpairs": 1, 00:22:56.340 "current_io_qpairs": 1, 00:22:56.340 "pending_bdev_io": 0, 00:22:56.340 "completed_nvme_io": 20379, 00:22:56.340 "transports": [ 00:22:56.340 { 00:22:56.340 "trtype": "TCP" 00:22:56.340 } 00:22:56.340 ] 00:22:56.340 }, 00:22:56.340 { 00:22:56.340 "name": "nvmf_tgt_poll_group_001", 00:22:56.340 "admin_qpairs": 0, 00:22:56.340 "io_qpairs": 1, 00:22:56.340 "current_admin_qpairs": 0, 00:22:56.340 "current_io_qpairs": 1, 00:22:56.340 "pending_bdev_io": 0, 00:22:56.340 "completed_nvme_io": 20506, 00:22:56.340 "transports": [ 00:22:56.340 { 00:22:56.340 "trtype": "TCP" 00:22:56.340 } 00:22:56.340 ] 00:22:56.340 }, 00:22:56.340 { 00:22:56.340 "name": "nvmf_tgt_poll_group_002", 00:22:56.340 "admin_qpairs": 0, 00:22:56.340 "io_qpairs": 1, 00:22:56.340 "current_admin_qpairs": 0, 00:22:56.340 "current_io_qpairs": 1, 00:22:56.340 "pending_bdev_io": 0, 00:22:56.340 "completed_nvme_io": 20086, 00:22:56.340 
"transports": [ 00:22:56.340 { 00:22:56.340 "trtype": "TCP" 00:22:56.340 } 00:22:56.340 ] 00:22:56.340 }, 00:22:56.340 { 00:22:56.340 "name": "nvmf_tgt_poll_group_003", 00:22:56.340 "admin_qpairs": 0, 00:22:56.340 "io_qpairs": 1, 00:22:56.340 "current_admin_qpairs": 0, 00:22:56.340 "current_io_qpairs": 1, 00:22:56.340 "pending_bdev_io": 0, 00:22:56.340 "completed_nvme_io": 20319, 00:22:56.340 "transports": [ 00:22:56.340 { 00:22:56.340 "trtype": "TCP" 00:22:56.340 } 00:22:56.340 ] 00:22:56.340 } 00:22:56.340 ] 00:22:56.340 }' 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:56.340 05:48:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 185461 00:23:04.508 Initializing NVMe Controllers 00:23:04.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:04.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:04.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:04.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:04.508 Initialization complete. Launching workers. 
00:23:04.508 ======================================================== 00:23:04.508 Latency(us) 00:23:04.508 Device Information : IOPS MiB/s Average min max 00:23:04.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10714.20 41.85 5973.72 2027.32 10139.89 00:23:04.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10956.49 42.80 5842.32 2218.10 10167.84 00:23:04.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10778.80 42.10 5937.45 1679.54 10580.36 00:23:04.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10794.70 42.17 5929.02 2417.11 10057.52 00:23:04.508 ======================================================== 00:23:04.508 Total : 43244.18 168.92 5920.23 1679.54 10580.36 00:23:04.508 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.508 rmmod nvme_tcp 00:23:04.508 rmmod nvme_fabrics 00:23:04.508 rmmod nvme_keyring 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:04.508 05:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 185342 ']' 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 185342 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 185342 ']' 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 185342 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.508 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185342 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185342' 00:23:04.508 killing process with pid 185342 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 185342 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 185342 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:04.508 05:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.508 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.421 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.421 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:06.421 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:06.421 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:07.799 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:10.333 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:15.606 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:15.606 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.606 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.606 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.606 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.606 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.607 05:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:15.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:15.607 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:15.607 Found net devices under 0000:af:00.0: cvl_0_0 00:23:15.607 05:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:15.607 Found net devices under 0000:af:00.1: cvl_0_1 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.607 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.607 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.829 ms 00:23:15.607 00:23:15.607 --- 10.0.0.2 ping statistics --- 00:23:15.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.607 rtt min/avg/max/mdev = 0.829/0.829/0.829/0.000 ms 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:23:15.608 00:23:15.608 --- 10.0.0.1 ping statistics --- 00:23:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.608 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:15.608 net.core.busy_poll = 1 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:15.608 net.core.busy_read = 1 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=189454 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 189454 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 189454 ']' 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.608 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.866 [2024-12-10 05:48:33.592224] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:23:15.866 [2024-12-10 05:48:33.592269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.866 [2024-12-10 05:48:33.677010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.866 [2024-12-10 05:48:33.718269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.866 [2024-12-10 05:48:33.718304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.866 [2024-12-10 05:48:33.718311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.866 [2024-12-10 05:48:33.718317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:15.866 [2024-12-10 05:48:33.718322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.866 [2024-12-10 05:48:33.719775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.866 [2024-12-10 05:48:33.719813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.866 [2024-12-10 05:48:33.719918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.866 [2024-12-10 05:48:33.719919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.799 [2024-12-10 05:48:34.620094] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.799 05:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.799 Malloc1 00:23:16.799 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.800 [2024-12-10 05:48:34.688982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=189703 
00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:16.800 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:19.325 "tick_rate": 2100000000, 00:23:19.325 "poll_groups": [ 00:23:19.325 { 00:23:19.325 "name": "nvmf_tgt_poll_group_000", 00:23:19.325 "admin_qpairs": 1, 00:23:19.325 "io_qpairs": 1, 00:23:19.325 "current_admin_qpairs": 1, 00:23:19.325 "current_io_qpairs": 1, 00:23:19.325 "pending_bdev_io": 0, 00:23:19.325 "completed_nvme_io": 27338, 00:23:19.325 "transports": [ 00:23:19.325 { 00:23:19.325 "trtype": "TCP" 00:23:19.325 } 00:23:19.325 ] 00:23:19.325 }, 00:23:19.325 { 00:23:19.325 "name": "nvmf_tgt_poll_group_001", 00:23:19.325 "admin_qpairs": 0, 00:23:19.325 "io_qpairs": 3, 00:23:19.325 "current_admin_qpairs": 0, 00:23:19.325 "current_io_qpairs": 3, 00:23:19.325 "pending_bdev_io": 0, 00:23:19.325 "completed_nvme_io": 30085, 00:23:19.325 "transports": [ 00:23:19.325 { 00:23:19.325 "trtype": "TCP" 00:23:19.325 } 00:23:19.325 ] 00:23:19.325 }, 00:23:19.325 { 00:23:19.325 "name": "nvmf_tgt_poll_group_002", 00:23:19.325 "admin_qpairs": 0, 00:23:19.325 "io_qpairs": 0, 00:23:19.325 "current_admin_qpairs": 0, 
00:23:19.325 "current_io_qpairs": 0, 00:23:19.325 "pending_bdev_io": 0, 00:23:19.325 "completed_nvme_io": 0, 00:23:19.325 "transports": [ 00:23:19.325 { 00:23:19.325 "trtype": "TCP" 00:23:19.325 } 00:23:19.325 ] 00:23:19.325 }, 00:23:19.325 { 00:23:19.325 "name": "nvmf_tgt_poll_group_003", 00:23:19.325 "admin_qpairs": 0, 00:23:19.325 "io_qpairs": 0, 00:23:19.325 "current_admin_qpairs": 0, 00:23:19.325 "current_io_qpairs": 0, 00:23:19.325 "pending_bdev_io": 0, 00:23:19.325 "completed_nvme_io": 0, 00:23:19.325 "transports": [ 00:23:19.325 { 00:23:19.325 "trtype": "TCP" 00:23:19.325 } 00:23:19.325 ] 00:23:19.325 } 00:23:19.325 ] 00:23:19.325 }' 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:19.325 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 189703 00:23:27.426 Initializing NVMe Controllers 00:23:27.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:27.426 Initialization complete. Launching workers. 
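The check at perf_adq.sh@108-109 above pipes the `nvmf_get_stats` JSON through `jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'`, counts the matching lines with `wc -l`, and fails if fewer than 2 poll groups are idle. A minimal Python sketch of the same filter, using the (abbreviated) stats from this run — field values are copied from the log, everything else is illustrative:

```python
# Abbreviated nvmf_get_stats output from this run; only the fields the
# check actually inspects are kept.
stats = {
    "tick_rate": 2100000000,
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1, "completed_nvme_io": 27338},
        {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 3, "completed_nvme_io": 30085},
        {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0, "completed_nvme_io": 0},
        {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0, "completed_nvme_io": 0},
    ],
}

# Same selection as: jq -r '.poll_groups[] | select(.current_io_qpairs == 0)'
# followed by counting the output lines with wc -l.
idle = [g["name"] for g in stats["poll_groups"] if g["current_io_qpairs"] == 0]
count = len(idle)
print(count)  # 2 — groups 002 and 003 carry no I/O qpairs in this run

# Mirrors the shell test [[ count -lt 2 ]]: with count=2 the branch is not
# taken, so the ADQ placement check passes and the script proceeds to wait.
assert not (count < 2)
```

With ADQ socket placement enabled, all four perf connections should land on a subset of the poll groups, which is why the test demands at least two idle groups.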
00:23:27.426 ======================================================== 00:23:27.426 Latency(us) 00:23:27.426 Device Information : IOPS MiB/s Average min max 00:23:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14941.40 58.36 4282.74 1059.16 45566.27 00:23:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4968.00 19.41 12925.08 1721.61 60203.96 00:23:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5593.90 21.85 11470.83 1463.71 57890.41 00:23:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5233.70 20.44 12266.29 1774.53 59651.69 00:23:27.426 ======================================================== 00:23:27.426 Total : 30737.00 120.07 8347.16 1059.16 60203.96 00:23:27.426 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.426 rmmod nvme_tcp 00:23:27.426 rmmod nvme_fabrics 00:23:27.426 rmmod nvme_keyring 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:27.426 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:27.427 05:48:44 
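The `spdk_nvme_perf` summary above can be sanity-checked by hand: the Total row's IOPS is the sum of the per-core IOPS, and its average latency is the IOPS-weighted mean of the per-core averages. A small sketch with the figures copied from the table (core numbers and values are from this run's output):

```python
# Per-core results copied from the spdk_nvme_perf summary:
# lcore -> (IOPS, average latency in microseconds)
cores = {
    4: (14941.40, 4282.74),
    5: (4968.00, 12925.08),
    6: (5593.90, 11470.83),
    7: (5233.70, 12266.29),
}

# Total IOPS is a plain sum across cores.
total_iops = sum(iops for iops, _ in cores.values())

# The overall average latency is weighted by each core's IOPS, since a
# faster core contributes proportionally more completed I/Os.
weighted_avg_us = sum(iops * lat for iops, lat in cores.values()) / total_iops

print(round(total_iops, 2))       # 30737.0, matching the Total row
print(round(weighted_avg_us, 2))  # close to the reported 8347.16 us average
```

Note the skew in this run: lcore 4 sustains roughly 3x the IOPS of the other cores at a third of their latency, consistent with uneven qpair placement across poll groups (one group served 1 qpair, another served 3).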
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 189454 ']' 00:23:27.427 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 189454 00:23:27.427 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 189454 ']' 00:23:27.427 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 189454 00:23:27.427 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:27.427 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.427 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 189454 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 189454' 00:23:27.427 killing process with pid 189454 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 189454 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 189454 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.427 05:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.427 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.331 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.331 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:29.331 00:23:29.331 real 0m52.265s 00:23:29.331 user 2m49.961s 00:23:29.331 sys 0m10.889s 00:23:29.331 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.331 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.331 ************************************ 00:23:29.331 END TEST nvmf_perf_adq 00:23:29.331 ************************************ 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.590 ************************************ 00:23:29.590 START TEST nvmf_shutdown 00:23:29.590 ************************************ 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:29.590 * Looking for test storage... 00:23:29.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.590 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.591 05:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.591 --rc genhtml_branch_coverage=1 00:23:29.591 --rc genhtml_function_coverage=1 00:23:29.591 --rc genhtml_legend=1 00:23:29.591 --rc geninfo_all_blocks=1 00:23:29.591 --rc geninfo_unexecuted_blocks=1 00:23:29.591 00:23:29.591 ' 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.591 --rc genhtml_branch_coverage=1 00:23:29.591 --rc genhtml_function_coverage=1 00:23:29.591 --rc genhtml_legend=1 00:23:29.591 --rc geninfo_all_blocks=1 00:23:29.591 --rc geninfo_unexecuted_blocks=1 00:23:29.591 00:23:29.591 ' 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.591 --rc genhtml_branch_coverage=1 00:23:29.591 --rc genhtml_function_coverage=1 00:23:29.591 --rc genhtml_legend=1 00:23:29.591 --rc geninfo_all_blocks=1 00:23:29.591 --rc geninfo_unexecuted_blocks=1 00:23:29.591 00:23:29.591 ' 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.591 --rc genhtml_branch_coverage=1 00:23:29.591 --rc genhtml_function_coverage=1 00:23:29.591 --rc genhtml_legend=1 00:23:29.591 --rc geninfo_all_blocks=1 00:23:29.591 --rc geninfo_unexecuted_blocks=1 00:23:29.591 00:23:29.591 ' 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
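The trace above shows `scripts/common.sh` deciding which lcov flags to use by evaluating `lt 1.15 2`: it splits both version strings on `.`, `-`, and `:` into arrays and compares them component-wise, treating missing components as 0. A hedged Python sketch of that comparison (not SPDK's code, just the same logic):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Compare dotted version strings component-wise, like cmp_versions
    in scripts/common.sh: split on '.', '-', ':', pad the shorter list
    with zeros, then compare element by element."""
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    if op == "<":
        return a < b   # Python list comparison is already element-wise
    if op == ">":
        return a > b
    return a == b

# Mirrors "lt 1.15 2" in the trace: 1 < 2 at the first component, so
# lcov 1.15 is treated as older than 2 and the legacy
# --rc lcov_branch_coverage/lcov_function_coverage options are selected.
print(cmp_versions("1.15", "<", "2"))  # True
```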
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.591 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.850 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:29.850 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:29.850 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.850 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.850 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:29.850 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:29.851 ************************************ 00:23:29.851 START TEST nvmf_shutdown_tc1 00:23:29.851 ************************************ 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.851 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:36.424 05:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.424 05:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:36.424 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.424 05:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:36.424 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:36.424 Found net devices under 0000:af:00.0: cvl_0_0 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:36.424 Found net devices under 0000:af:00.1: cvl_0_1 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.424 05:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.424 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.683 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:23:36.684 00:23:36.684 --- 10.0.0.2 ping statistics --- 00:23:36.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.684 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:23:36.684 00:23:36.684 --- 10.0.0.1 ping statistics --- 00:23:36.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.684 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=195374 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 195374 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 195374 ']' 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:36.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.684 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.684 [2024-12-10 05:48:54.544918] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:23:36.684 [2024-12-10 05:48:54.544962] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.684 [2024-12-10 05:48:54.626760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.941 [2024-12-10 05:48:54.668600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.941 [2024-12-10 05:48:54.668633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.941 [2024-12-10 05:48:54.668640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.941 [2024-12-10 05:48:54.668646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.941 [2024-12-10 05:48:54.668651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.941 [2024-12-10 05:48:54.670173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.941 [2024-12-10 05:48:54.670284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.941 [2024-12-10 05:48:54.670388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.941 [2024-12-10 05:48:54.670389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:37.504 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.504 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:37.504 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.504 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.504 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.504 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.505 [2024-12-10 05:48:55.413659] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.505 05:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.505 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.762 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.762 Malloc1 00:23:37.762 [2024-12-10 05:48:55.528171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.762 Malloc2 00:23:37.762 Malloc3 00:23:37.762 Malloc4 00:23:37.762 Malloc5 00:23:38.019 Malloc6 00:23:38.019 Malloc7 00:23:38.019 Malloc8 00:23:38.019 Malloc9 
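The repeated `for i in "${num_subsystems[@]}"` / `cat` pairs above (target/shutdown.sh@27-29) are batching one block of RPC commands per subsystem into rpcs.txt. A sketch of that pattern, assuming illustrative RPC lines — the log only shows the loop and the `cat`, not the heredoc contents, and the file path is replaced by a `mktemp` stand-in:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown.sh@27-29 pattern above: delete any stale batch file,
# then append one block of RPC commands per subsystem. The three RPC lines
# per block are assumptions about a typical subsystem setup, not copied
# from the log.
set -euo pipefail

rpcs=$(mktemp)                 # stand-in for test/nvmf/target/rpcs.txt
num_subsystems=({1..10})       # matches target/shutdown.sh@23

rm -f "$rpcs"
for i in "${num_subsystems[@]}"; do
    {
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> "$rpcs"
done

wc -l < "$rpcs"    # three RPC lines per subsystem, ten subsystems: 30
```

Batching the RPCs into one file and replaying it in a single `rpc_cmd` invocation (the `rpc_cmd` at shutdown.sh@36 above) avoids paying the socket round-trip per call.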
00:23:38.019 Malloc10 00:23:38.019 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.019 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:38.019 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.019 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.019 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=195657 00:23:38.019 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 195657 /var/tmp/bdevperf.sock 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 195657 ']' 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
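The `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` call above is what produces the per-controller JSON blocks dumped later in this log: one `bdev_nvme_attach_controller` config object per subsystem number, accumulated into an array and joined with `IFS=,` before printing. A compact sketch of that pattern — `gen_target_config` is our stand-in name, and `printf` replaces the heredoc, but the fields and values mirror the log output:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the log: build one
# attach-controller object per subsystem, then join the objects with commas
# (local IFS=, plus "${config[*]}"), as nvmf/common.sh@585-586 does above.
set -euo pipefail

gen_target_config() {
    local s config=()
    for s in "$@"; do
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$s" "$s" "$s")")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_config 1 2 3
```

Each subsystem number parameterizes the controller name, subsystem NQN, and host NQN in lockstep, which is why the log shows Nvme1/cnode1/host1 through Nvme10/cnode10/host10.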
00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.020 { 00:23:38.020 "params": { 00:23:38.020 "name": "Nvme$subsystem", 00:23:38.020 "trtype": "$TEST_TRANSPORT", 00:23:38.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.020 "adrfam": "ipv4", 00:23:38.020 "trsvcid": "$NVMF_PORT", 00:23:38.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.020 "hdgst": ${hdgst:-false}, 00:23:38.020 "ddgst": ${ddgst:-false} 00:23:38.020 }, 00:23:38.020 "method": "bdev_nvme_attach_controller" 00:23:38.020 } 00:23:38.020 EOF 00:23:38.020 )") 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.020 { 00:23:38.020 "params": { 00:23:38.020 "name": "Nvme$subsystem", 00:23:38.020 "trtype": "$TEST_TRANSPORT", 00:23:38.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.020 "adrfam": "ipv4", 00:23:38.020 "trsvcid": "$NVMF_PORT", 00:23:38.020 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.020 "hdgst": ${hdgst:-false}, 00:23:38.020 "ddgst": ${ddgst:-false} 00:23:38.020 }, 00:23:38.020 "method": "bdev_nvme_attach_controller" 00:23:38.020 } 00:23:38.020 EOF 00:23:38.020 )") 00:23:38.020 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 00:23:38.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.278 "hdgst": ${hdgst:-false}, 00:23:38.278 "ddgst": ${ddgst:-false} 00:23:38.278 }, 00:23:38.278 "method": "bdev_nvme_attach_controller" 00:23:38.278 } 00:23:38.278 EOF 00:23:38.278 )") 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 00:23:38.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.278 "hdgst": 
${hdgst:-false}, 00:23:38.278 "ddgst": ${ddgst:-false} 00:23:38.278 }, 00:23:38.278 "method": "bdev_nvme_attach_controller" 00:23:38.278 } 00:23:38.278 EOF 00:23:38.278 )") 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 00:23:38.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.278 "hdgst": ${hdgst:-false}, 00:23:38.278 "ddgst": ${ddgst:-false} 00:23:38.278 }, 00:23:38.278 "method": "bdev_nvme_attach_controller" 00:23:38.278 } 00:23:38.278 EOF 00:23:38.278 )") 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 00:23:38.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.278 "hdgst": ${hdgst:-false}, 00:23:38.278 "ddgst": ${ddgst:-false} 00:23:38.278 }, 00:23:38.278 "method": "bdev_nvme_attach_controller" 
00:23:38.278 } 00:23:38.278 EOF 00:23:38.278 )") 00:23:38.278 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 [2024-12-10 05:48:56.001789] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:23:38.278 [2024-12-10 05:48:56.001839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 00:23:38.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.278 "hdgst": ${hdgst:-false}, 00:23:38.278 "ddgst": ${ddgst:-false} 00:23:38.278 }, 00:23:38.278 "method": "bdev_nvme_attach_controller" 00:23:38.278 } 00:23:38.278 EOF 00:23:38.278 )") 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 
00:23:38.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.278 "hdgst": ${hdgst:-false}, 00:23:38.278 "ddgst": ${ddgst:-false} 00:23:38.278 }, 00:23:38.278 "method": "bdev_nvme_attach_controller" 00:23:38.278 } 00:23:38.278 EOF 00:23:38.278 )") 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.278 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.278 { 00:23:38.278 "params": { 00:23:38.278 "name": "Nvme$subsystem", 00:23:38.278 "trtype": "$TEST_TRANSPORT", 00:23:38.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.278 "adrfam": "ipv4", 00:23:38.278 "trsvcid": "$NVMF_PORT", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.279 "hdgst": ${hdgst:-false}, 00:23:38.279 "ddgst": ${ddgst:-false} 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 } 00:23:38.279 EOF 00:23:38.279 )") 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.279 { 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme$subsystem", 00:23:38.279 "trtype": "$TEST_TRANSPORT", 00:23:38.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "$NVMF_PORT", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:38.279 "hdgst": ${hdgst:-false}, 00:23:38.279 "ddgst": ${ddgst:-false} 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 } 00:23:38.279 EOF 00:23:38.279 )") 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:38.279 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme1", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme2", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme3", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 
"name": "Nvme4", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme5", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme6", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme7", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme8", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:38.279 
"hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme9", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 },{ 00:23:38.279 "params": { 00:23:38.279 "name": "Nvme10", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "4420", 00:23:38.279 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:38.279 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:38.279 "hdgst": false, 00:23:38.279 "ddgst": false 00:23:38.279 }, 00:23:38.279 "method": "bdev_nvme_attach_controller" 00:23:38.279 }' 00:23:38.279 [2024-12-10 05:48:56.082439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.279 [2024-12-10 05:48:56.121981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 195657 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:40.175 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:41.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 195657 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 195374 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": 
${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 
00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:23:41.108 [2024-12-10 05:48:58.932182] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:23:41.108 [2024-12-10 05:48:58.932240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196141 ] 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.108 "hdgst": ${hdgst:-false}, 00:23:41.108 "ddgst": ${ddgst:-false} 00:23:41.108 }, 00:23:41.108 "method": "bdev_nvme_attach_controller" 00:23:41.108 } 00:23:41.108 EOF 00:23:41.108 )") 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.108 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.108 { 00:23:41.108 "params": { 00:23:41.108 "name": "Nvme$subsystem", 00:23:41.108 "trtype": "$TEST_TRANSPORT", 00:23:41.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.108 "adrfam": "ipv4", 00:23:41.108 "trsvcid": "$NVMF_PORT", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.109 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:41.109 "hdgst": ${hdgst:-false}, 00:23:41.109 "ddgst": ${ddgst:-false} 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 } 00:23:41.109 EOF 00:23:41.109 )") 00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.109 { 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme$subsystem", 00:23:41.109 "trtype": "$TEST_TRANSPORT", 00:23:41.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "$NVMF_PORT", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.109 "hdgst": ${hdgst:-false}, 00:23:41.109 "ddgst": ${ddgst:-false} 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 } 00:23:41.109 EOF 00:23:41.109 )") 00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:41.109 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme1", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme2", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme3", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme4", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 
00:23:41.109 "name": "Nvme5", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme6", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme7", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme8", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme9", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 },{ 00:23:41.109 "params": { 00:23:41.109 "name": "Nvme10", 00:23:41.109 "trtype": "tcp", 00:23:41.109 "traddr": "10.0.0.2", 00:23:41.109 "adrfam": "ipv4", 00:23:41.109 "trsvcid": "4420", 00:23:41.109 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:41.109 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:41.109 "hdgst": false, 00:23:41.109 "ddgst": false 00:23:41.109 }, 00:23:41.109 "method": "bdev_nvme_attach_controller" 00:23:41.109 }' 00:23:41.109 [2024-12-10 05:48:59.015303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.109 [2024-12-10 05:48:59.055631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.480 Running I/O for 1 seconds... 00:23:43.671 2253.00 IOPS, 140.81 MiB/s 00:23:43.671 Latency(us) 00:23:43.671 [2024-12-10T04:49:01.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme1n1 : 1.14 281.93 17.62 0.00 0.00 222487.31 15104.49 211712.49 00:23:43.671 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme2n1 : 1.05 243.20 15.20 0.00 0.00 256340.85 16602.45 224694.86 00:23:43.671 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme3n1 : 1.12 286.84 17.93 0.00 0.00 214559.21 16477.62 213709.78 00:23:43.671 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme4n1 : 1.13 287.91 17.99 0.00 0.00 210195.21 2933.52 211712.49 00:23:43.671 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme5n1 : 1.15 278.25 17.39 0.00 0.00 215219.05 17101.78 213709.78 00:23:43.671 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme6n1 : 1.14 289.21 18.08 0.00 0.00 201755.36 13606.52 199728.76 00:23:43.671 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme7n1 : 1.14 283.72 17.73 0.00 0.00 204812.83 1817.84 230686.72 00:23:43.671 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme8n1 : 1.13 282.91 17.68 0.00 0.00 202239.80 17975.59 225693.50 00:23:43.671 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme9n1 : 1.15 277.27 17.33 0.00 0.00 203756.89 12732.71 223696.21 00:23:43.671 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.671 Verification LBA range: start 0x0 length 0x400 00:23:43.671 Nvme10n1 : 1.15 278.83 17.43 0.00 0.00 199347.83 17725.93 230686.72 00:23:43.671 [2024-12-10T04:49:01.630Z] =================================================================================================================== 00:23:43.671 [2024-12-10T04:49:01.630Z] Total : 2790.07 174.38 0.00 0.00 212146.72 1817.84 230686.72 00:23:43.671 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:43.671 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.930 rmmod nvme_tcp 00:23:43.930 rmmod nvme_fabrics 00:23:43.930 rmmod nvme_keyring 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 195374 ']' 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 195374 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 195374 ']' 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 195374 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 195374 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 195374' 00:23:43.930 killing process with pid 195374 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 195374 00:23:43.930 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 195374 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.189 05:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.189 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.723 00:23:46.723 real 0m16.609s 00:23:46.723 user 0m35.812s 00:23:46.723 sys 0m6.490s 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 ************************************ 00:23:46.723 END TEST nvmf_shutdown_tc1 00:23:46.723 ************************************ 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 ************************************ 00:23:46.723 
START TEST nvmf_shutdown_tc2 00:23:46.723 ************************************ 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.723 05:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.723 05:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.723 05:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:46.723 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:46.723 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:46.723 05:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.723 05:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:46.723 Found net devices under 0000:af:00.0: cvl_0_0 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.723 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:46.724 Found net devices under 0000:af:00.1: cvl_0_1 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.724 05:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:46.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:23:46.724 00:23:46.724 --- 10.0.0.2 ping statistics --- 00:23:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.724 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:46.724 00:23:46.724 --- 10.0.0.1 ping statistics --- 00:23:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.724 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.724 05:49:04 
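The `nvmf_tcp_init` trace above builds the standard SPDK TCP test topology: one port of the NIC pair (cvl_0_0) is moved into a private namespace for the target, the other (cvl_0_1) stays in the default namespace as the initiator, each side gets a 10.0.0.0/24 address, port 4420 is opened in iptables, and a ping in each direction verifies reachability. A minimal sketch of that sequence follows; the namespace and device names here are hypothetical stand-ins (the log uses cvl_0_0_ns_spdk, cvl_0_0 and cvl_0_1), and the script only prints the plan unless explicitly run as root with `apply`:

```shell
# Sketch of the nvmf_tcp_init topology seen in the log above.
# NS/TGT/INI are hypothetical names; the real run uses cvl_0_0_ns_spdk etc.
NS=spdk_tgt_ns        # private namespace for the nvmf target
TGT=veth_tgt          # target-side NIC (moved into $NS)
INI=veth_ini          # initiator-side NIC (stays in default ns)
MODE="${1:-dry-run}"  # pass "apply" as root to actually configure

steps=(
  "ip netns add $NS"
  "ip link set $TGT netns $NS"
  "ip addr add 10.0.0.1/24 dev $INI"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT"
  "ip link set $INI up"
  "ip netns exec $NS ip link set $TGT up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"
)
for s in "${steps[@]}"; do
  echo "+ $s"                              # show each step
  if [[ "$MODE" == apply ]]; then $s; fi   # execute only when asked
done
```

The target listens inside the namespace on 10.0.0.2:4420 (the NVMe/TCP default port), which is why the iptables ACCEPT rule and the cross-namespace pings in the log target exactly those addresses.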
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=197157 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 197157 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 197157 ']' 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.724 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.724 [2024-12-10 05:49:04.644404] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:23:46.724 [2024-12-10 05:49:04.644448] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.983 [2024-12-10 05:49:04.729460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.983 [2024-12-10 05:49:04.769620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.983 [2024-12-10 05:49:04.769657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.983 [2024-12-10 05:49:04.769664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.983 [2024-12-10 05:49:04.769670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.983 [2024-12-10 05:49:04.769675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.983 [2024-12-10 05:49:04.771178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.983 [2024-12-10 05:49:04.771274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.983 [2024-12-10 05:49:04.771313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.983 [2024-12-10 05:49:04.771314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:47.548 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.548 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:47.548 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.548 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.548 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.806 [2024-12-10 05:49:05.525378] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.806 05:49:05 
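The `waitforlisten 197157` call above (with its "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message and `max_retries=100`) polls until the freshly started `nvmf_tgt` both stays alive and exposes its RPC socket. A rough sketch of that idea, assuming an existence check on the socket path stands in for the real RPC probe:

```shell
# Sketch of the waitforlisten pattern from autotest_common.sh above:
# poll until $pid is alive AND its RPC socket path appears, or give up
# after max_retries. The existence test is a simplification of the real
# probe, which talks to the socket.
waitforlisten() {
  local pid=$1
  local sock=${2:-/var/tmp/spdk.sock}   # default matches the log
  local max_retries=100 i
  for ((i = 0; i < max_retries; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
      return 1                          # process died while we waited
    fi
    if [[ -e "$sock" ]]; then
      return 0                          # socket present: app is listening
    fi
    sleep 0.1
  done
  return 1                              # retries exhausted
}
```

On success the harness proceeds (the `return 0` at autotest_common.sh@868 in the trace); on failure the surrounding trap tears the test down.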
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.806 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.806 Malloc1 00:23:47.806 [2024-12-10 05:49:05.639034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.806 Malloc2 00:23:47.806 Malloc3 00:23:47.806 Malloc4 00:23:48.064 Malloc5 00:23:48.064 Malloc6 00:23:48.064 Malloc7 00:23:48.064 Malloc8 00:23:48.064 Malloc9 
00:23:48.064 Malloc10 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=197437 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 197437 /var/tmp/bdevperf.sock 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 197437 ']' 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:48.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.322 { 00:23:48.322 "params": { 00:23:48.322 "name": "Nvme$subsystem", 00:23:48.322 "trtype": "$TEST_TRANSPORT", 00:23:48.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.322 "adrfam": "ipv4", 00:23:48.322 "trsvcid": "$NVMF_PORT", 00:23:48.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.322 "hdgst": ${hdgst:-false}, 00:23:48.322 "ddgst": ${ddgst:-false} 00:23:48.322 }, 00:23:48.322 "method": "bdev_nvme_attach_controller" 00:23:48.322 } 00:23:48.322 EOF 00:23:48.322 )") 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.322 { 00:23:48.322 "params": { 00:23:48.322 "name": "Nvme$subsystem", 00:23:48.322 "trtype": "$TEST_TRANSPORT", 00:23:48.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.322 "adrfam": "ipv4", 00:23:48.322 "trsvcid": "$NVMF_PORT", 00:23:48.322 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.322 "hdgst": ${hdgst:-false}, 00:23:48.322 "ddgst": ${ddgst:-false} 00:23:48.322 }, 00:23:48.322 "method": "bdev_nvme_attach_controller" 00:23:48.322 } 00:23:48.322 EOF 00:23:48.322 )") 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.322 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.322 { 00:23:48.322 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": 
${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 
00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 [2024-12-10 05:49:06.112977] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:23:48.323 [2024-12-10 05:49:06.113029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197437 ] 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": 
"bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.323 { 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme$subsystem", 00:23:48.323 "trtype": "$TEST_TRANSPORT", 00:23:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "$NVMF_PORT", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.323 "hdgst": ${hdgst:-false}, 00:23:48.323 "ddgst": ${ddgst:-false} 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 } 00:23:48.323 EOF 00:23:48.323 )") 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:48.323 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme1", 00:23:48.323 "trtype": "tcp", 00:23:48.323 "traddr": "10.0.0.2", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "4420", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.323 "hdgst": false, 00:23:48.323 "ddgst": false 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 },{ 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme2", 00:23:48.323 "trtype": "tcp", 00:23:48.323 "traddr": "10.0.0.2", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "4420", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.323 "hdgst": false, 00:23:48.323 "ddgst": false 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 },{ 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme3", 00:23:48.323 "trtype": "tcp", 00:23:48.323 "traddr": "10.0.0.2", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "4420", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:48.323 "hdgst": false, 00:23:48.323 "ddgst": false 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 },{ 00:23:48.323 "params": { 00:23:48.323 "name": "Nvme4", 00:23:48.323 "trtype": "tcp", 00:23:48.323 "traddr": "10.0.0.2", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "4420", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:48.323 "hdgst": false, 00:23:48.323 "ddgst": false 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 },{ 00:23:48.323 "params": { 
00:23:48.323 "name": "Nvme5", 00:23:48.323 "trtype": "tcp", 00:23:48.323 "traddr": "10.0.0.2", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "4420", 00:23:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:48.323 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:48.323 "hdgst": false, 00:23:48.323 "ddgst": false 00:23:48.323 }, 00:23:48.323 "method": "bdev_nvme_attach_controller" 00:23:48.323 },{ 00:23:48.323 "params": { 00:23:48.324 "name": "Nvme6", 00:23:48.324 "trtype": "tcp", 00:23:48.324 "traddr": "10.0.0.2", 00:23:48.324 "adrfam": "ipv4", 00:23:48.324 "trsvcid": "4420", 00:23:48.324 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:48.324 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:48.324 "hdgst": false, 00:23:48.324 "ddgst": false 00:23:48.324 }, 00:23:48.324 "method": "bdev_nvme_attach_controller" 00:23:48.324 },{ 00:23:48.324 "params": { 00:23:48.324 "name": "Nvme7", 00:23:48.324 "trtype": "tcp", 00:23:48.324 "traddr": "10.0.0.2", 00:23:48.324 "adrfam": "ipv4", 00:23:48.324 "trsvcid": "4420", 00:23:48.324 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:48.324 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:48.324 "hdgst": false, 00:23:48.324 "ddgst": false 00:23:48.324 }, 00:23:48.324 "method": "bdev_nvme_attach_controller" 00:23:48.324 },{ 00:23:48.324 "params": { 00:23:48.324 "name": "Nvme8", 00:23:48.324 "trtype": "tcp", 00:23:48.324 "traddr": "10.0.0.2", 00:23:48.324 "adrfam": "ipv4", 00:23:48.324 "trsvcid": "4420", 00:23:48.324 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:48.324 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:48.324 "hdgst": false, 00:23:48.324 "ddgst": false 00:23:48.324 }, 00:23:48.324 "method": "bdev_nvme_attach_controller" 00:23:48.324 },{ 00:23:48.324 "params": { 00:23:48.324 "name": "Nvme9", 00:23:48.324 "trtype": "tcp", 00:23:48.324 "traddr": "10.0.0.2", 00:23:48.324 "adrfam": "ipv4", 00:23:48.324 "trsvcid": "4420", 00:23:48.324 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:48.324 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:48.324 "hdgst": false, 00:23:48.324 "ddgst": false 00:23:48.324 }, 00:23:48.324 "method": "bdev_nvme_attach_controller" 00:23:48.324 },{ 00:23:48.324 "params": { 00:23:48.324 "name": "Nvme10", 00:23:48.324 "trtype": "tcp", 00:23:48.324 "traddr": "10.0.0.2", 00:23:48.324 "adrfam": "ipv4", 00:23:48.324 "trsvcid": "4420", 00:23:48.324 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:48.324 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:48.324 "hdgst": false, 00:23:48.324 "ddgst": false 00:23:48.324 }, 00:23:48.324 "method": "bdev_nvme_attach_controller" 00:23:48.324 }' 00:23:48.324 [2024-12-10 05:49:06.195460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.324 [2024-12-10 05:49:06.235120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.695 Running I/O for 10 seconds... 00:23:49.695 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.695 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:49.695 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:49.695 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.695 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:49.952 05:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:49.952 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.210 05:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:50.210 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 197437 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 197437 ']' 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 197437 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.467 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197437 00:23:50.724 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.724 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.724 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197437' 00:23:50.724 killing process with pid 197437 00:23:50.724 05:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 197437 00:23:50.724 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 197437 00:23:50.724 Received shutdown signal, test time was about 0.920417 seconds 00:23:50.724 00:23:50.724 Latency(us) 00:23:50.724 [2024-12-10T04:49:08.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.724 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.724 Verification LBA range: start 0x0 length 0x400 00:23:50.724 Nvme1n1 : 0.91 282.07 17.63 0.00 0.00 224493.23 16976.94 212711.13 00:23:50.724 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.724 Verification LBA range: start 0x0 length 0x400 00:23:50.724 Nvme2n1 : 0.90 284.46 17.78 0.00 0.00 218604.50 17101.78 215707.06 00:23:50.724 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.724 Verification LBA range: start 0x0 length 0x400 00:23:50.724 Nvme3n1 : 0.89 298.23 18.64 0.00 0.00 202749.10 6210.32 201726.05 00:23:50.724 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.724 Verification LBA range: start 0x0 length 0x400 00:23:50.724 Nvme4n1 : 0.89 305.67 19.10 0.00 0.00 193485.47 6740.85 209715.20 00:23:50.725 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.725 Verification LBA range: start 0x0 length 0x400 00:23:50.725 Nvme5n1 : 0.92 279.32 17.46 0.00 0.00 211147.09 17850.76 216705.71 00:23:50.725 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.725 Verification LBA range: start 0x0 length 0x400 00:23:50.725 Nvme6n1 : 0.91 280.37 17.52 0.00 0.00 205879.10 16477.62 209715.20 00:23:50.725 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.725 Verification LBA range: start 0x0 length 0x400 00:23:50.725 Nvme7n1 : 0.90 
287.63 17.98 0.00 0.00 196853.16 2090.91 198730.12 00:23:50.725 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.725 Verification LBA range: start 0x0 length 0x400 00:23:50.725 Nvme8n1 : 0.91 284.52 17.78 0.00 0.00 195153.72 3308.01 216705.71 00:23:50.725 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.725 Verification LBA range: start 0x0 length 0x400 00:23:50.725 Nvme9n1 : 0.92 278.33 17.40 0.00 0.00 196577.52 16477.62 228689.43 00:23:50.725 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.725 Verification LBA range: start 0x0 length 0x400 00:23:50.725 Nvme10n1 : 0.89 224.47 14.03 0.00 0.00 235579.14 3994.58 221698.93 00:23:50.725 [2024-12-10T04:49:08.684Z] =================================================================================================================== 00:23:50.725 [2024-12-10T04:49:08.684Z] Total : 2805.08 175.32 0.00 0.00 207288.69 2090.91 228689.43 00:23:50.981 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 197157 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.912 rmmod nvme_tcp 00:23:51.912 rmmod nvme_fabrics 00:23:51.912 rmmod nvme_keyring 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:51.912 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 197157 ']' 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 197157 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 197157 ']' 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 197157 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197157 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197157' 00:23:51.913 killing process with pid 197157 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 197157 00:23:51.913 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 197157 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.479 05:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.479 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.382 00:23:54.382 real 0m7.992s 00:23:54.382 user 0m24.216s 00:23:54.382 sys 0m1.426s 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:54.382 ************************************ 00:23:54.382 END TEST nvmf_shutdown_tc2 00:23:54.382 ************************************ 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.382 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:54.641 ************************************ 00:23:54.641 START TEST nvmf_shutdown_tc3 00:23:54.641 ************************************ 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:54.641 05:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.641 05:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.641 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:54.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:54.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:54.642 Found net devices under 0000:af:00.0: cvl_0_0 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.642 05:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:54.642 Found net devices under 0000:af:00.1: cvl_0_1 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:54.642 00:23:54.642 --- 10.0.0.2 ping statistics --- 00:23:54.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.642 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:54.642 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:23:54.901 00:23:54.901 --- 10.0.0.1 ping statistics --- 00:23:54.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.901 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.901 
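The `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step traced above (nvmf/common.sh@265-266 and @293) can be sketched as follows: the namespace prefix lives in one array and is spliced in front of the target command, so every later invocation of the app runs inside the test namespace. The `nvmf_tgt` arguments here are abbreviated from the ones in the log.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-prefix pattern: keep "ip netns exec <ns>" in an
# array and prepend it to the app's argv before launch.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -m 0x1E)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prepend the prefix
echo "${NVMF_APP[@]}"
```

Keeping the prefix in an array (rather than a string) preserves word boundaries if the namespace name ever contains unusual characters.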
05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=198562 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 198562 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 198562 ']' 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.901 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.901 [2024-12-10 05:49:12.696086] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
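The `waitforlisten 198562` call traced above (autotest_common.sh, with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`) follows a poll-with-retries shape that can be sketched like this. `probe` is a stand-in for the real check that the process listens on the RPC socket; here it reports ready on the third poll so the sketch is self-contained.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten retry loop: run a readiness probe up to
# max_retries times before giving up.
tries=0
probe() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }  # stub: ready on poll 3

waitforlisten() {
  local max_retries=${1:-100} i
  for ((i = 0; i < max_retries; i++)); do
    probe && return 0   # success as soon as the socket is up
    sleep 0.1
  done
  return 1              # app never came up within the retry budget
}

waitforlisten 100 && echo "listening after $tries polls"
```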
00:23:54.901 [2024-12-10 05:49:12.696129] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.901 [2024-12-10 05:49:12.777385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.901 [2024-12-10 05:49:12.817775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.901 [2024-12-10 05:49:12.817812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.901 [2024-12-10 05:49:12.817819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.901 [2024-12-10 05:49:12.817824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.901 [2024-12-10 05:49:12.817829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.901 [2024-12-10 05:49:12.819393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.901 [2024-12-10 05:49:12.819504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.901 [2024-12-10 05:49:12.819609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.901 [2024-12-10 05:49:12.819610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.832 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.833 [2024-12-10 05:49:13.571562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.833 05:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.833 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.833 Malloc1 00:23:55.833 [2024-12-10 05:49:13.688451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.833 Malloc2 00:23:55.833 Malloc3 00:23:56.090 Malloc4 00:23:56.090 Malloc5 00:23:56.090 Malloc6 00:23:56.090 Malloc7 00:23:56.090 Malloc8 00:23:56.090 Malloc9 
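The `for i in "${num_subsystems[@]}"` / `cat` iterations traced above (shutdown.sh@23-29) append one block of RPC calls per subsystem into rpcs.txt. A minimal sketch, with the caveat that the two RPC lines per subsystem are illustrative (the real heredoc blocks in shutdown.sh also create namespaces and listeners):

```shell
#!/usr/bin/env bash
# Sketch of the create_subsystems loop: one batch of RPC lines per subsystem,
# accumulated into a file that is later replayed against the target.
rpcs=$(mktemp)
num_subsystems=({1..10})
for i in "${num_subsystems[@]}"; do
  cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
EOF
done
lines=$(($(wc -l < "$rpcs")))   # 10 subsystems x 2 RPC lines each
echo "queued $lines rpc lines"
rm -f "$rpcs"
```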
00:23:56.348 Malloc10 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=198870 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 198870 /var/tmp/bdevperf.sock 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 198870 ']' 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:56.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.348 { 00:23:56.348 "params": { 00:23:56.348 "name": "Nvme$subsystem", 00:23:56.348 "trtype": "$TEST_TRANSPORT", 00:23:56.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.348 "adrfam": "ipv4", 00:23:56.348 "trsvcid": "$NVMF_PORT", 00:23:56.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.348 "hdgst": ${hdgst:-false}, 00:23:56.348 "ddgst": ${ddgst:-false} 00:23:56.348 }, 00:23:56.348 "method": "bdev_nvme_attach_controller" 00:23:56.348 } 00:23:56.348 EOF 00:23:56.348 )") 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.348 { 00:23:56.348 "params": { 00:23:56.348 "name": "Nvme$subsystem", 00:23:56.348 "trtype": "$TEST_TRANSPORT", 00:23:56.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.348 
"adrfam": "ipv4", 00:23:56.348 "trsvcid": "$NVMF_PORT", 00:23:56.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.348 "hdgst": ${hdgst:-false}, 00:23:56.348 "ddgst": ${ddgst:-false} 00:23:56.348 }, 00:23:56.348 "method": "bdev_nvme_attach_controller" 00:23:56.348 } 00:23:56.348 EOF 00:23:56.348 )") 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.348 { 00:23:56.348 "params": { 00:23:56.348 "name": "Nvme$subsystem", 00:23:56.348 "trtype": "$TEST_TRANSPORT", 00:23:56.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.348 "adrfam": "ipv4", 00:23:56.348 "trsvcid": "$NVMF_PORT", 00:23:56.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.348 "hdgst": ${hdgst:-false}, 00:23:56.348 "ddgst": ${ddgst:-false} 00:23:56.348 }, 00:23:56.348 "method": "bdev_nvme_attach_controller" 00:23:56.348 } 00:23:56.348 EOF 00:23:56.348 )") 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.348 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.348 { 00:23:56.348 "params": { 00:23:56.348 "name": "Nvme$subsystem", 00:23:56.348 "trtype": "$TEST_TRANSPORT", 00:23:56.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.348 "adrfam": "ipv4", 00:23:56.348 "trsvcid": "$NVMF_PORT", 00:23:56.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:56.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.348 "hdgst": ${hdgst:-false}, 00:23:56.348 "ddgst": ${ddgst:-false} 00:23:56.348 }, 00:23:56.348 "method": "bdev_nvme_attach_controller" 00:23:56.348 } 00:23:56.348 EOF 00:23:56.348 )") 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.349 { 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme$subsystem", 00:23:56.349 "trtype": "$TEST_TRANSPORT", 00:23:56.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "$NVMF_PORT", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.349 "hdgst": ${hdgst:-false}, 00:23:56.349 "ddgst": ${ddgst:-false} 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 } 00:23:56.349 EOF 00:23:56.349 )") 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.349 { 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme$subsystem", 00:23:56.349 "trtype": "$TEST_TRANSPORT", 00:23:56.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "$NVMF_PORT", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.349 "hdgst": ${hdgst:-false}, 00:23:56.349 "ddgst": 
${ddgst:-false} 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 } 00:23:56.349 EOF 00:23:56.349 )") 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.349 { 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme$subsystem", 00:23:56.349 "trtype": "$TEST_TRANSPORT", 00:23:56.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "$NVMF_PORT", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.349 "hdgst": ${hdgst:-false}, 00:23:56.349 "ddgst": ${ddgst:-false} 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 } 00:23:56.349 EOF 00:23:56.349 )") 00:23:56.349 [2024-12-10 05:49:14.161706] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:23:56.349 [2024-12-10 05:49:14.161759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198870 ] 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.349 { 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme$subsystem", 00:23:56.349 "trtype": "$TEST_TRANSPORT", 00:23:56.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "$NVMF_PORT", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.349 "hdgst": ${hdgst:-false}, 00:23:56.349 "ddgst": ${ddgst:-false} 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 } 00:23:56.349 EOF 00:23:56.349 )") 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.349 { 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme$subsystem", 00:23:56.349 "trtype": "$TEST_TRANSPORT", 00:23:56.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "$NVMF_PORT", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.349 "hdgst": 
${hdgst:-false}, 00:23:56.349 "ddgst": ${ddgst:-false} 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 } 00:23:56.349 EOF 00:23:56.349 )") 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.349 { 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme$subsystem", 00:23:56.349 "trtype": "$TEST_TRANSPORT", 00:23:56.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "$NVMF_PORT", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.349 "hdgst": ${hdgst:-false}, 00:23:56.349 "ddgst": ${ddgst:-false} 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 } 00:23:56.349 EOF 00:23:56.349 )") 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
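The `config+=("$(cat <<-EOF ...)")` fragments traced above, followed by `jq .`, `IFS=,`, and `printf '%s\n'`, are the gen_nvmf_target_json pattern: one JSON fragment per subsystem is captured from a heredoc into an array, then the fragments are comma-joined to form the `--json` config handed to bdevperf. A minimal sketch with two subsystems and a fixed address:

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-per-subsystem config builder: capture each fragment
# into an array, then join with commas via "${config[*]}" under IFS=,.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
old_ifs=$IFS; IFS=,
joined="${config[*]}"      # comma-joined fragments, as nvmf/common.sh does
IFS=$old_ifs
printf '%s\n' "$joined"
```

Joining with `"${config[*]}"` under a temporary `IFS=,` is what produces the `},{` boundaries visible in the printed config above.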
00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:56.349 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme1", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme2", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme3", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme4", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 
00:23:56.349 "name": "Nvme5", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme6", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme7", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme8", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.349 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:56.349 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:56.349 "hdgst": false, 00:23:56.349 "ddgst": false 00:23:56.349 }, 00:23:56.349 "method": "bdev_nvme_attach_controller" 00:23:56.349 },{ 00:23:56.349 "params": { 00:23:56.349 "name": "Nvme9", 00:23:56.349 "trtype": "tcp", 00:23:56.349 "traddr": "10.0.0.2", 00:23:56.349 "adrfam": "ipv4", 00:23:56.349 "trsvcid": "4420", 00:23:56.350 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:56.350 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:56.350 "hdgst": false, 00:23:56.350 "ddgst": false 00:23:56.350 }, 00:23:56.350 "method": "bdev_nvme_attach_controller" 00:23:56.350 },{ 00:23:56.350 "params": { 00:23:56.350 "name": "Nvme10", 00:23:56.350 "trtype": "tcp", 00:23:56.350 "traddr": "10.0.0.2", 00:23:56.350 "adrfam": "ipv4", 00:23:56.350 "trsvcid": "4420", 00:23:56.350 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:56.350 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:56.350 "hdgst": false, 00:23:56.350 "ddgst": false 00:23:56.350 }, 00:23:56.350 "method": "bdev_nvme_attach_controller" 00:23:56.350 }' 00:23:56.350 [2024-12-10 05:49:14.243613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.350 [2024-12-10 05:49:14.283904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.242 Running I/O for 10 seconds... 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.242 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.499 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.499 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:58.499 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:58.499 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:58.756 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 198562 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 198562 ']' 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 198562 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 198562 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 198562' 00:23:59.028 killing process with pid 198562 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 198562 00:23:59.028 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 198562 00:23:59.028 [2024-12-10 05:49:16.893352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7af6d0 is same with the state(6) to be set 00:23:59.029 [2024-12-10 05:49:16.896406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7afba0 is same with the state(6) to be set 00:23:59.029 [2024-12-10 05:49:16.896561] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7afba0 is same with the state(6) to be set
00:23:59.029 [2024-12-10 05:49:16.896557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.029 [2024-12-10 05:49:16.896806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.029 [2024-12-10 05:49:16.896813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.896986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.896993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.030 [2024-12-10 05:49:16.897084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.030 [2024-12-10 05:49:16.897090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 
[2024-12-10 05:49:16.897178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.030 [2024-12-10 05:49:16.897360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.030 [2024-12-10 05:49:16.897367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 
05:49:16.897510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.897565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.031 [2024-12-10 05:49:16.897571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.031 [2024-12-10 05:49:16.898705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the 
state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 
05:49:16.898896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898970] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.898994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.031 [2024-12-10 05:49:16.899086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.899091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.899097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.899104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.899110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.899115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.899121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0070 
is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 
00:23:59.032 [2024-12-10 05:49:16.900192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900268] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set 00:23:59.032 [2024-12-10 05:49:16.900343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set
00:23:59.032 [2024-12-10 05:49:16.900349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0560 is same with the state(6) to be set
00:23:59.032 [... message repeated for tqpair=0x7b0560 through 05:49:16.900508; duplicate entries omitted ...]
00:23:59.032 [2024-12-10 05:49:16.901199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0a30 is same with the state(6) to be set
00:23:59.033 [... message repeated for tqpair=0x7b0a30 through 05:49:16.901815; duplicate entries omitted ...]
00:23:59.033 [2024-12-10 05:49:16.902695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f20 is same with the state(6) to be set
00:23:59.034 [... message repeated for tqpair=0x7b0f20 through 05:49:16.903077; duplicate entries omitted ...]
00:23:59.034 [2024-12-10 05:49:16.904016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b13f0 is same with the state(6) to be set
00:23:59.034 [... message repeated for tqpair=0x7b13f0 through 05:49:16.904067; duplicate entries omitted ...]
00:23:59.034 [2024-12-10 05:49:16.904635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1770 is same with the state(6) to be set
00:23:59.035 [... message repeated for tqpair=0x7b1770 through 05:49:16.905022; duplicate entries omitted ...]
00:23:59.035 [2024-12-10 05:49:16.905588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1c40 is same with the state(6) to be set
00:23:59.035 [... message repeated for tqpair=0x7b1c40 through 05:49:16.905657; duplicate entries omitted ...]
00:23:59.035 [2024-12-10 05:49:16.909671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:59.035 [2024-12-10 05:49:16.909735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19be450 (9): Bad file descriptor
00:23:59.035 [2024-12-10 05:49:16.909764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:59.035 [2024-12-10 05:49:16.909774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.035 [2024-12-10 05:49:16.909781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:59.035 [2024-12-10 05:49:16.909788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.035 [2024-12-10 05:49:16.909795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:59.035 [2024-12-10 05:49:16.909802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.035 [2024-12-10 05:49:16.909809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:59.035 [2024-12-10 05:49:16.909815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.035 [2024-12-10 05:49:16.909821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fb60 is same with the state(6) to be set
00:23:59.035 [2024-12-10 05:49:16.909848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:59.035 [2024-12-10 05:49:16.909857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.035 [2024-12-10 05:49:16.909868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0
cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a610 is same with the state(6) to be set 00:23:59.035 [2024-12-10 05:49:16.909930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.909977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.909983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19df430 is same with the state(6) to be set 00:23:59.035 [2024-12-10 05:49:16.910007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15657d0 is same with the state(6) to be set 00:23:59.035 [2024-12-10 05:49:16.910085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1559810 is same with the state(6) to be set 00:23:59.035 [2024-12-10 05:49:16.910160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19906e0 is same with the state(6) to be set 00:23:59.035 [2024-12-10 05:49:16.910247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.035 [2024-12-10 05:49:16.910268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:59.035 [2024-12-10 05:49:16.910275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155aa60 is same with the state(6) to be set 00:23:59.036 [2024-12-10 05:49:16.910327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910367] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561b90 is same with the state(6) to be set 00:23:59.036 [2024-12-10 05:49:16.910401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.036 [2024-12-10 05:49:16.910449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15622c0 is same with the 
state(6) to be set 00:23:59.036 [2024-12-10 05:49:16.910811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 
05:49:16.910918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.910989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.910996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.036 [2024-12-10 05:49:16.911182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.036 [2024-12-10 05:49:16.911188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 
05:49:16.911249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 
05:49:16.911568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.037 [2024-12-10 05:49:16.911955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.037 [2024-12-10 05:49:16.911966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.911975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.911983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.911990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.911998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 
[2024-12-10 05:49:16.912083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.038 [2024-12-10 05:49:16.912335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.038 [2024-12-10 05:49:16.912559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.038 [2024-12-10 05:49:16.912567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 
05:49:16.912664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912746] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.912878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.039 [2024-12-10 05:49:16.912884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.039 [2024-12-10 05:49:16.915296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:59.039 [2024-12-10 05:49:16.915337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a610 (9): Bad file descriptor 00:23:59.039 [2024-12-10 05:49:16.915526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.039 [2024-12-10 05:49:16.915540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x19be450 with addr=10.0.0.2, port=4420 00:23:59.039 [2024-12-10 05:49:16.915548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19be450 is same with the state(6) to be set 00:23:59.039 [2024-12-10 05:49:16.915606] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.915910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:59.039 [2024-12-10 05:49:16.915932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fb60 (9): Bad file descriptor 00:23:59.039 [2024-12-10 05:49:16.915950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19be450 (9): Bad file descriptor 00:23:59.039 [2024-12-10 05:49:16.916008] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.916455] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.916902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.039 [2024-12-10 05:49:16.916917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147a610 with addr=10.0.0.2, port=4420 00:23:59.039 [2024-12-10 05:49:16.916925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a610 is same with the state(6) to be set 00:23:59.039 [2024-12-10 05:49:16.916944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:59.039 [2024-12-10 05:49:16.916951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:59.039 [2024-12-10 05:49:16.916961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:23:59.039 [2024-12-10 05:49:16.916969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:59.039 [2024-12-10 05:49:16.917030] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.917076] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.917118] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.917160] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.039 [2024-12-10 05:49:16.917336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.039 [2024-12-10 05:49:16.917350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198fb60 with addr=10.0.0.2, port=4420 00:23:59.039 [2024-12-10 05:49:16.917358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fb60 is same with the state(6) to be set 00:23:59.039 [2024-12-10 05:49:16.917368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a610 (9): Bad file descriptor 00:23:59.039 [2024-12-10 05:49:16.917438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fb60 (9): Bad file descriptor 00:23:59.039 [2024-12-10 05:49:16.917448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:59.039 [2024-12-10 05:49:16.917455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:59.039 [2024-12-10 05:49:16.917462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:23:59.039 [2024-12-10 05:49:16.917469] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:59.039 [2024-12-10 05:49:16.917503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:59.039 [2024-12-10 05:49:16.917513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:59.039 [2024-12-10 05:49:16.917520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:59.039 [2024-12-10 05:49:16.917525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:59.039 [2024-12-10 05:49:16.919718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19df430 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.919741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15657d0 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.919755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1559810 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.919769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19906e0 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.919784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155aa60 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.919798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561b90 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.919812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15622c0 (9): Bad file descriptor
00:23:59.039 [2024-12-10 05:49:16.923079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:59.039 [2024-12-10 05:49:16.923292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.040 [2024-12-10 05:49:16.923308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19be450 with addr=10.0.0.2, port=4420
00:23:59.040 [2024-12-10 05:49:16.923316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19be450 is same with the state(6) to be set
00:23:59.040 [2024-12-10 05:49:16.923349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19be450 (9): Bad file descriptor
00:23:59.040 [2024-12-10 05:49:16.923388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:59.040 [2024-12-10 05:49:16.923396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:59.040 [2024-12-10 05:49:16.923404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:59.040 [2024-12-10 05:49:16.923411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:59.040 [2024-12-10 05:49:16.926038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:59.040 [2024-12-10 05:49:16.926344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.040 [2024-12-10 05:49:16.926359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147a610 with addr=10.0.0.2, port=4420
00:23:59.040 [2024-12-10 05:49:16.926366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a610 is same with the state(6) to be set
00:23:59.040 [2024-12-10 05:49:16.926402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a610 (9): Bad file descriptor
00:23:59.040 [2024-12-10 05:49:16.926436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:59.040 [2024-12-10 05:49:16.926443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:59.040 [2024-12-10 05:49:16.926449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:59.040 [2024-12-10 05:49:16.926458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:59.040 [2024-12-10 05:49:16.927021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:59.040 [2024-12-10 05:49:16.927239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.040 [2024-12-10 05:49:16.927254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198fb60 with addr=10.0.0.2, port=4420
00:23:59.040 [2024-12-10 05:49:16.927262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fb60 is same with the state(6) to be set
00:23:59.040 [2024-12-10 05:49:16.927293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fb60 (9): Bad file descriptor
00:23:59.040 [2024-12-10 05:49:16.927325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:59.040 [2024-12-10 05:49:16.927332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:59.040 [2024-12-10 05:49:16.927339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:59.040 [2024-12-10 05:49:16.927345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:59.040 [2024-12-10 05:49:16.929864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.929989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.929995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.040 [2024-12-10 05:49:16.930323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.040 [2024-12-10 05:49:16.930330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.041 [2024-12-10 05:49:16.930790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.041 [2024-12-10 05:49:16.930796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.930804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.930811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.930819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.930825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.930832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a91810 is same with the state(6) to be set
00:23:59.042 [2024-12-10 05:49:16.931853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.931991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.931997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.042 [2024-12-10 05:49:16.932248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.042 [2024-12-10 05:49:16.932254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 
05:49:16.932341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.042 [2024-12-10 05:49:16.932423] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.042 [2024-12-10 05:49:16.932430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 
[2024-12-10 05:49:16.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.932804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.932810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a92920 is same with the state(6) to be set 00:23:59.043 [2024-12-10 05:49:16.933790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.043 [2024-12-10 05:49:16.933818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.933990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.933996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.043 [2024-12-10 05:49:16.934004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.043 [2024-12-10 05:49:16.934010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.044 [2024-12-10 05:49:16.934078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 
05:49:16.934422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934505] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.044 [2024-12-10 05:49:16.934611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.044 [2024-12-10 05:49:16.934619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 
[2024-12-10 05:49:16.934670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.934756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.934763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93b90 is same with the state(6) to be set 00:23:59.045 [2024-12-10 05:49:16.935757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.935990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.935996] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.045 [2024-12-10 05:49:16.936142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.045 [2024-12-10 05:49:16.936150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 
05:49:16.936251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 
[2024-12-10 05:49:16.936496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.936703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.936711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a94e00 is same with the state(6) to be set 00:23:59.046 [2024-12-10 05:49:16.937686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.937699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.046 [2024-12-10 05:49:16.937710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.046 [2024-12-10 05:49:16.937720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.046 [2024-12-10 05:49:16.937728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.047 [2024-12-10 05:49:16.937975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.937990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.937997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 
05:49:16.938311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.047 [2024-12-10 05:49:16.938326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.047 [2024-12-10 05:49:16.938334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938392] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 
[2024-12-10 05:49:16.938560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.938633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.938642] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb40 is same with the state(6) to be set 00:23:59.048 [2024-12-10 05:49:16.939629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.048 [2024-12-10 05:49:16.939797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.048 [2024-12-10 05:49:16.939908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.048 [2024-12-10 05:49:16.939917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.939924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.939931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.939938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.939946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.939952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.939960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.939966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.939974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.939981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.939988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.939994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 
05:49:16.940124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940204] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 
[2024-12-10 05:49:16.940375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.049 [2024-12-10 05:49:16.940438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.049 [2024-12-10 05:49:16.940445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.940453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.945131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.945138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419480 is same with the state(6) to be set 00:23:59.050 [2024-12-10 05:49:16.946134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.050 [2024-12-10 05:49:16.946189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.050 [2024-12-10 05:49:16.946439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.050 [2024-12-10 05:49:16.946599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.050 [2024-12-10 05:49:16.946606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 
05:49:16.946761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.946991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.946999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.947005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.947013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 
[2024-12-10 05:49:16.947019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.947027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.947036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.947044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.947050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.947058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.947064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.947072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.051 [2024-12-10 05:49:16.947078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.051 [2024-12-10 05:49:16.947085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b4880 is same with the state(6) to be set 00:23:59.051 [2024-12-10 05:49:16.948038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:59.051 [2024-12-10 05:49:16.948056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:59.051 [2024-12-10 05:49:16.948066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:59.051 [2024-12-10 05:49:16.948077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:59.051 [2024-12-10 05:49:16.948158] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:59.051 [2024-12-10 05:49:16.948173] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:59.051 [2024-12-10 05:49:16.948183] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:59.051 [2024-12-10 05:49:16.948466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:59.051 [2024-12-10 05:49:16.948483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:59.051 task offset: 25728 on job bdev=Nvme10n1 fails
00:23:59.051
00:23:59.051 Latency(us)
00:23:59.051 [2024-12-10T04:49:17.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:59.051 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.051 Job: Nvme1n1 ended in about 0.95 seconds with error
00:23:59.051 Verification LBA range: start 0x0 length 0x400
00:23:59.051 Nvme1n1 : 0.95 201.72 12.61 67.24 0.00 235544.62 20222.54 213709.78
00:23:59.051 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.051 Job: Nvme2n1 ended in about 0.95 seconds with error
00:23:59.051 Verification LBA range: start 0x0 length 0x400
00:23:59.051 Nvme2n1 : 0.95 201.30 12.58 67.10 0.00 232162.99 16477.62 212711.13
00:23:59.051 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.051 Job: Nvme3n1 ended in about 0.96 seconds with error
00:23:59.051 Verification LBA range: start 0x0 length 0x400
00:23:59.051 Nvme3n1 : 0.96 205.08 12.82 66.96 0.00 225249.27 15541.39 195734.19
00:23:59.052 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme4n1 ended in about 0.96 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme4n1 : 0.96 275.67 17.23 66.83 0.00 175875.20 15666.22 208716.56
00:23:59.052 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme5n1 ended in about 0.96 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme5n1 : 0.96 200.08 12.51 66.69 0.00 222045.50 15166.90 233682.65
00:23:59.052 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme6n1 ended in about 0.97 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme6n1 : 0.97 198.74 12.42 66.25 0.00 219849.63 26588.89 222697.57
00:23:59.052 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme7n1 ended in about 0.93 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme7n1 : 0.93 274.01 17.13 68.50 0.00 166369.72 4993.22 203723.34
00:23:59.052 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme8n1 ended in about 0.97 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme8n1 : 0.97 198.34 12.40 66.11 0.00 212598.25 15229.32 181753.17
00:23:59.052 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme9n1 ended in about 0.94 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme9n1 : 0.94 205.31 12.83 68.44 0.00 200631.47 5617.37 218702.99
00:23:59.052 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.052 Job: Nvme10n1 ended in about 0.93 seconds with error
00:23:59.052 Verification LBA range: start 0x0 length 0x400
00:23:59.052 Nvme10n1 : 0.93 206.50 12.91 68.83 0.00 195462.22 13918.60 235679.94
00:23:59.052 [2024-12-10T04:49:17.011Z] ===================================================================================================================
00:23:59.052 [2024-12-10T04:49:17.011Z] Total : 2166.76 135.42 672.96 0.00 206730.97 4993.22 235679.94
00:23:59.355 [2024-12-10 05:49:16.980738] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:59.355 [2024-12-10 05:49:16.980790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:59.355 [2024-12-10 05:49:16.981132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.355 [2024-12-10 05:49:16.981151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15657d0 with addr=10.0.0.2, port=4420
00:23:59.355 [2024-12-10 05:49:16.981162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15657d0 is same with the state(6) to be set
00:23:59.355 [2024-12-10 05:49:16.981313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.355 [2024-12-10 05:49:16.981324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559810 with addr=10.0.0.2, port=4420
00:23:59.355 [2024-12-10 05:49:16.981331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1559810 is same with the state(6) to be set
00:23:59.355 [2024-12-10 05:49:16.981480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.355 [2024-12-10 05:49:16.981490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561b90 with
addr=10.0.0.2, port=4420 00:23:59.355 [2024-12-10 05:49:16.981498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561b90 is same with the state(6) to be set 00:23:59.355 [2024-12-10 05:49:16.981571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.355 [2024-12-10 05:49:16.981582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15622c0 with addr=10.0.0.2, port=4420 00:23:59.355 [2024-12-10 05:49:16.981589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15622c0 is same with the state(6) to be set 00:23:59.355 [2024-12-10 05:49:16.983135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:59.355 [2024-12-10 05:49:16.983153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:59.355 [2024-12-10 05:49:16.983395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.355 [2024-12-10 05:49:16.983415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19906e0 with addr=10.0.0.2, port=4420 00:23:59.355 [2024-12-10 05:49:16.983424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19906e0 is same with the state(6) to be set 00:23:59.355 [2024-12-10 05:49:16.983553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.355 [2024-12-10 05:49:16.983563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155aa60 with addr=10.0.0.2, port=4420 00:23:59.355 [2024-12-10 05:49:16.983571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155aa60 is same with the state(6) to be set 00:23:59.355 [2024-12-10 05:49:16.983710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.355 [2024-12-10 05:49:16.983720] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19df430 with addr=10.0.0.2, port=4420 00:23:59.355 [2024-12-10 05:49:16.983728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19df430 is same with the state(6) to be set 00:23:59.355 [2024-12-10 05:49:16.983741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15657d0 (9): Bad file descriptor 00:23:59.355 [2024-12-10 05:49:16.983753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1559810 (9): Bad file descriptor 00:23:59.356 [2024-12-10 05:49:16.983761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561b90 (9): Bad file descriptor 00:23:59.356 [2024-12-10 05:49:16.983770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15622c0 (9): Bad file descriptor 00:23:59.356 [2024-12-10 05:49:16.983802] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:59.356 [2024-12-10 05:49:16.983817] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:59.356 [2024-12-10 05:49:16.983827] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:59.356 [2024-12-10 05:49:16.983836] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:23:59.356 [2024-12-10 05:49:16.983846] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:23:59.356 [2024-12-10 05:49:16.983925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:59.356 [2024-12-10 05:49:16.984168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.356 [2024-12-10 05:49:16.984181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19be450 with addr=10.0.0.2, port=4420
00:23:59.356 [2024-12-10 05:49:16.984189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19be450 is same with the state(6) to be set
00:23:59.356 [2024-12-10 05:49:16.984358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.356 [2024-12-10 05:49:16.984370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147a610 with addr=10.0.0.2, port=4420
00:23:59.356 [2024-12-10 05:49:16.984377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a610 is same with the state(6) to be set
00:23:59.356 [2024-12-10 05:49:16.984386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19906e0 (9): Bad file descriptor
00:23:59.356 [2024-12-10 05:49:16.984396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155aa60 (9): Bad file descriptor
00:23:59.356 [2024-12-10 05:49:16.984406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19df430 (9): Bad file descriptor
00:23:59.356 [2024-12-10 05:49:16.984414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984515] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.356 [2024-12-10 05:49:16.984692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198fb60 with addr=10.0.0.2, port=4420
00:23:59.356 [2024-12-10 05:49:16.984699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fb60 is same with the state(6) to be set
00:23:59.356 [2024-12-10 05:49:16.984706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19be450 (9): Bad file descriptor
00:23:59.356 [2024-12-10 05:49:16.984715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a610 (9): Bad file descriptor
00:23:59.356 [2024-12-10 05:49:16.984723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984740] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984791] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fb60 (9): Bad file descriptor
00:23:59.356 [2024-12-10 05:49:16.984823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:59.356 [2024-12-10 05:49:16.984888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:59.356 [2024-12-10 05:49:16.984895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:59.356 [2024-12-10 05:49:16.984901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:59.356 [2024-12-10 05:49:16.984907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:59.691 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 198870
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 198870
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 198870
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:00.628 rmmod nvme_tcp
00:24:00.628 rmmod nvme_fabrics
00:24:00.628 rmmod nvme_keyring
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 198562 ']'
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 198562
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 198562 ']'
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 198562
00:24:00.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (198562) - No such process
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 198562 is not found'
00:24:00.628 Process with pid 198562 is not found
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:00.628 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:00.629 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:00.629 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:00.629 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:02.532
00:24:02.532 real 0m8.097s
00:24:02.532 user 0m20.835s
00:24:02.532 sys 0m1.368s
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:02.532 ************************************
00:24:02.532 END TEST nvmf_shutdown_tc3
00:24:02.532 ************************************
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:02.532 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:02.791 ************************************
00:24:02.791 START TEST nvmf_shutdown_tc4
00:24:02.791 ************************************
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:24:02.791 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:24:02.792 Found 0000:af:00.0 (0x8086 - 0x159b)
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:24:02.792 Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:24:02.792 Found net devices under 0000:af:00.0: cvl_0_0
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:24:02.792 Found net devices under 0000:af:00.1: cvl_0_1
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:02.792 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:03.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:03.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:24:03.051 00:24:03.051 --- 10.0.0.2 ping statistics --- 00:24:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.051 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:24:03.051 00:24:03.051 --- 10.0.0.1 ping statistics --- 00:24:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.051 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.051 05:49:20 
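The nvmf_tcp_init steps above (common.sh@271–@290) build a two-interface loopback topology: the target NIC cvl_0_0 is moved into a private namespace with 10.0.0.2, while the initiator NIC cvl_0_1 stays in the root namespace with 10.0.0.1, so NVMe/TCP traffic crosses the physical link. A dry-run sketch of that sequence (interface names and addresses are taken from the log; the `run` echo wrapper is illustrative — swap in `eval "$@"` as root to actually apply it):

```shell
# Dry-run sketch of the nvmf_tcp_init sequence seen in the log.
# run() only prints each command so this is safe to execute unprivileged.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"                                        # common.sh@271
run ip link set cvl_0_0 netns "$NS"                           # @274: target NIC into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # @277: initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # @278: target side
run ip link set cvl_0_1 up                                    # @281
run ip netns exec "$NS" ip link set cvl_0_0 up                # @283
run ip netns exec "$NS" ip link set lo up                     # @284
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # @287: allow 4420
run ping -c 1 10.0.0.2                                        # @290: reachability check
```

The two pings in the log (root ns → 10.0.0.2, then inside the ns → 10.0.0.1) are what gate `return 0` from nvmf_tcp_init.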
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=200017 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 200017 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 200017 ']' 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
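`waitforlisten 200017` above blocks until the freshly launched nvmf_tgt opens its RPC socket at /var/tmp/spdk.sock. A simplified re-creation of that polling pattern (the real helper also retries RPC connections and honors `max_retries=100`; `wait_for_path` and the 0.1 s interval are illustrative):

```shell
# Poll until a path appears, up to a retry cap, like waitforlisten does
# for the /var/tmp/spdk.sock RPC socket.
wait_for_path() {            # wait_for_path <path> <max_retries>
  _i=0
  while [ "$_i" -lt "$2" ]; do
    [ -e "$1" ] && return 0  # target appeared: the app is up
    _i=$((_i + 1))
    sleep 0.1
  done
  echo "timed out waiting for $1" >&2
  return 1
}
```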
00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.051 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.051 [2024-12-10 05:49:20.892474] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:24:03.051 [2024-12-10 05:49:20.892524] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.051 [2024-12-10 05:49:20.980096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.308 [2024-12-10 05:49:21.022714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.308 [2024-12-10 05:49:21.022749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.308 [2024-12-10 05:49:21.022756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.308 [2024-12-10 05:49:21.022762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.308 [2024-12-10 05:49:21.022767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.308 [2024-12-10 05:49:21.024282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.309 [2024-12-10 05:49:21.024392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.309 [2024-12-10 05:49:21.024497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.309 [2024-12-10 05:49:21.024497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.872 [2024-12-10 05:49:21.771868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.872 05:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.872 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:04.129 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:04.129 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.129 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:04.129 Malloc1 00:24:04.129 [2024-12-10 05:49:21.885115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.129 Malloc2 00:24:04.129 Malloc3 00:24:04.129 Malloc4 00:24:04.129 Malloc5 00:24:04.129 Malloc6 00:24:04.386 Malloc7 00:24:04.386 Malloc8 00:24:04.386 Malloc9 
00:24:04.386 Malloc10 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=200292 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:04.386 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:04.642 [2024-12-10 05:49:22.391908] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
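The create_subsystems phase that just finished (shutdown.sh@27–@36) appends one RPC fragment per subsystem to rpcs.txt via repeated `cat`, then replays the whole file through a single `rpc_cmd` call — which is why the log shows ten `for i in "${num_subsystems[@]}"` / `cat` pairs followed by one `rpc_cmd` and the Malloc1–Malloc10 bdevs. A condensed sketch of that batching pattern (the NQNs and serial numbers are illustrative; the rpc verbs are SPDK's):

```shell
# Build a batch file of RPC commands, one subsystem per iteration,
# to be replayed later in a single rpc.py invocation.
rpcs=$(mktemp)
for i in 1 2 3; do
  cat >> "$rpcs" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
echo "queued $(wc -l < "$rpcs") rpc lines"   # -> queued 6 rpc lines
```

Batching into one rpc.py process avoids paying Python startup cost once per subsystem, which matters when the loop runs ten times per test case.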
00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 200017 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 200017 ']' 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 200017 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200017 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.907 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200017' 00:24:09.907 killing process with pid 200017 00:24:09.908 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 200017 00:24:09.908 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 200017 00:24:09.908 [2024-12-10 05:49:27.390387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76440 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.390432] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76440 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.390444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76440 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.390459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76440 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.390467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76440 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.390475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76440 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.391557] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76930 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.392509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76e20 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.393413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75f70 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.393437] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75f70 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.393445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75f70 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.393452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75f70 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.393458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75f70 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.393465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75f70 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, 
sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 starting I/O failed: -6 00:24:09.908 [2024-12-10 05:49:27.395394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.395402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.395414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.395426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77cd0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed 
with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 [2024-12-10 05:49:27.395929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f781a0 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.395948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f781a0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f781a0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 [2024-12-10 05:49:27.395966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x1f781a0 is same with the state(6) to be set 00:24:09.908 starting I/O failed: -6 00:24:09.908 [2024-12-10 05:49:27.395973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f781a0 is same with the state(6) to be set 00:24:09.908 [2024-12-10 05:49:27.395979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f781a0 is same with the state(6) to be set 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.908 starting I/O failed: -6 00:24:09.908 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 
starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.396449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.396471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.396480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.396493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.396506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.396513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.396533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.396550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.396568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.396575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77310 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.396610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such
device or address) on qpair id 3 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f79ea0 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.397048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f79ea0 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 
00:24:09.909 [2024-12-10 05:49:27.397059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f79ea0 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f79ea0 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f79ea0 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 
Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with Write completed with error (sct=0, sc=8) 00:24:09.909 the state(6) to be set 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.397541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.397559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to 
be set 00:24:09.909 [2024-12-10 05:49:27.397572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with Write completed with error (sct=0, sc=8) 00:24:09.909 the state(6) to be set 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5a10 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 [2024-12-10 05:49:27.397620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 starting I/O failed: -6 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.909 [2024-12-10 05:49:27.397847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with starting I/O failed: -6 00:24:09.909 the state(6) to be set 00:24:09.909 [2024-12-10 05:49:27.397867] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.909 Write completed with error (sct=0, sc=8) 00:24:09.910 [2024-12-10 05:49:27.397874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.397880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.397886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.397892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with Write completed with error (sct=0, sc=8) 00:24:09.910 the state(6) to be set 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.397902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.397909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.397914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 
00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.398174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.398188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 [2024-12-10 05:49:27.398195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.398201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.398207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 [2024-12-10 05:49:27.398213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.398229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.398235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f799d0 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error 
(sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.399213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:09.910 NVMe io qpair process completion error 00:24:09.910 [2024-12-10 05:49:27.403184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8dd0 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.403207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8dd0 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.403496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e92a0 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 
00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 [2024-12-10 05:49:27.403841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 starting I/O failed: -6 00:24:09.910 [2024-12-10 05:49:27.403862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 [2024-12-10 05:49:27.403870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.403877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 [2024-12-10 05:49:27.403883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.403890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.403896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with Write completed with error (sct=0, sc=8) 00:24:09.910 the state(6) to be set 00:24:09.910 [2024-12-10 05:49:27.403903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8900 is same with the state(6) to be set 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with 
error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 starting I/O failed: -6 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error 
(sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 Write completed with error (sct=0, sc=8) 00:24:09.910 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.405501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed 
with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, 
sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.406272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, 
sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.406853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7a90 is same with the state(6) to be set 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.406873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7a90 is same with the state(6) to be set 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 [2024-12-10 05:49:27.406883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7a90 is same with the state(6) to be set 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.406894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7a90 is same with the state(6) to be set 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 [2024-12-10 05:49:27.406904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7a90 is same with the state(6) to be set 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 
00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.407183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7f60 is same with the state(6) to be set 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 [2024-12-10 05:49:27.407199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7f60 is same with the state(6) to be set 00:24:09.911 Write completed with error (sct=0, sc=8) 00:24:09.911 [2024-12-10 05:49:27.407206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7f60 is same with the state(6) to be set 00:24:09.911 starting I/O failed: -6 00:24:09.911 [2024-12-10 05:49:27.407214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7f60 is same with the state(6) to be set 00:24:09.911 [2024-12-10 
05:49:27.407228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7f60 is same with the state(6) to be set
00:24:09.911 Write completed with error (sct=0, sc=8)
00:24:09.911 starting I/O failed: -6
00:24:09.911 [2024-12-10 05:49:27.407315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:09.912 [2024-12-10 05:49:27.407771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8430 is same with the state(6) to be set
00:24:09.912 [2024-12-10 05:49:27.408208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e75c0 is same with the state(6) to be set
00:24:09.912 [2024-12-10 05:49:27.409002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:09.912 NVMe io qpair process completion error
00:24:09.913 [2024-12-10 05:49:27.409923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:09.913 [2024-12-10 05:49:27.410824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:09.913 [2024-12-10 05:49:27.411863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:09.914 [2024-12-10 05:49:27.413429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:09.914 NVMe io qpair process completion error
00:24:09.914 [2024-12-10 05:49:27.414395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:09.914 [2024-12-10 05:49:27.415304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:09.915 [2024-12-10 05:49:27.416330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:09.915 [2024-12-10 05:49:27.418486] nvme_qpair.c:
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:09.915 NVMe io qpair process completion error 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed 
with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 [2024-12-10 05:49:27.419487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, 
sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.915 starting I/O failed: -6 00:24:09.915 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 
Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 [2024-12-10 05:49:27.420381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:09.916 starting I/O failed: -6 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O 
failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write 
completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 [2024-12-10 05:49:27.421421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O 
failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting 
I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.916 Write completed with error (sct=0, sc=8) 00:24:09.916 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 [2024-12-10 05:49:27.424865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No 
such device or address) on qpair id 3 00:24:09.917 NVMe io qpair process completion error 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed 
with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 [2024-12-10 05:49:27.425925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 
Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 [2024-12-10 05:49:27.426742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 
starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 
Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.917 starting I/O failed: -6 00:24:09.917 Write completed with error (sct=0, sc=8) 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 starting I/O failed: -6 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 starting I/O failed: -6 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 starting I/O failed: -6 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 starting I/O failed: -6 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 starting I/O failed: -6 00:24:09.918 Write completed with error (sct=0, sc=8) 00:24:09.918 starting I/O failed: -6 00:24:09.918 [2024-12-10 05:49:27.427734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on 
qpair id 3
00:24:09.918 starting I/O failed: -6
00:24:09.918 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.918 [2024-12-10 05:49:27.430755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:09.918 NVMe io qpair process completion error
00:24:09.918 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.918 [2024-12-10 05:49:27.431742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:09.918 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.919 [2024-12-10 05:49:27.432642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:09.919 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.919 [2024-12-10 05:49:27.433621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:09.919 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.920 [2024-12-10 05:49:27.435422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:09.920 NVMe io qpair process completion error
00:24:09.920 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.920 [2024-12-10 05:49:27.436564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:09.920 starting I/O failed: -6
00:24:09.920 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.920 [2024-12-10 05:49:27.437461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:09.920 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.920 [2024-12-10 05:49:27.438486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:09.920 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.921 [2024-12-10 05:49:27.444164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:09.921 NVMe io qpair process completion error
00:24:09.921 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:24:09.921 [2024-12-10 05:49:27.445186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:09.921 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries continue ...]
00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 starting I/O failed: -6 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.921 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 [2024-12-10 05:49:27.446103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such 
device or address) on qpair id 3 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 
00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 [2024-12-10 
05:49:27.447093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 
00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: 
-6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.922 starting I/O failed: -6 00:24:09.922 Write completed with error (sct=0, sc=8) 00:24:09.923 starting I/O failed: -6 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 starting I/O failed: -6 00:24:09.923 [2024-12-10 05:49:27.451113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:09.923 NVMe io qpair process completion error 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with 
error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with 
error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with 
error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with 
error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Write completed with error (sct=0, sc=8) 00:24:09.923 Initializing NVMe Controllers 00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:09.923 Controller IO queue size 128, less than required. 00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:09.923 Controller IO queue size 128, less than required. 00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:09.923 Controller IO queue size 128, less than required. 00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:09.923 Controller IO queue size 128, less than required. 00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:09.923 Controller IO queue size 128, less than required. 00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:09.923 Controller IO queue size 128, less than required. 
00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:09.923 Controller IO queue size 128, less than required.
00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:09.923 Controller IO queue size 128, less than required.
00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:09.923 Controller IO queue size 128, less than required.
00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:09.923 Controller IO queue size 128, less than required.
00:24:09.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:09.923 Initialization complete. Launching workers.
00:24:09.923 ========================================================
00:24:09.923 Latency(us)
00:24:09.923 Device Information : IOPS MiB/s Average min max
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2203.48 94.68 58093.77 645.73 107960.45
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2210.65 94.99 57916.28 664.71 124322.31
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2190.45 94.12 58469.08 706.54 123241.96
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2257.79 97.01 56759.61 726.61 101659.18
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2254.74 96.88 56861.61 732.23 123771.88
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2261.04 97.15 56719.83 892.42 110779.37
00:24:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2187.41 93.99 58062.43 704.73 96878.97
00:24:09.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2209.35 94.93 58062.32 965.24 114898.28
00:24:09.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2169.38 93.22 59117.82 382.16 120007.67
00:24:09.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2219.99 95.39 57195.42 788.36 94045.92
00:24:09.924 ========================================================
00:24:09.924 Total : 22164.27 952.37 57715.77 382.16 124322.31
00:24:09.924
00:24:09.924 [2024-12-10 05:49:27.457127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a48740 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a47bc0 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1a49900 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49ae0 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a48410 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a47560 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49720 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a48a70 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a47ef0 is same with the state(6) to be set
00:24:09.924 [2024-12-10 05:49:27.457404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a47890 is same with the state(6) to be set
00:24:09.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:09.924 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 200292
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 200292
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 200292
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:24:10.860 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:10.861 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:10.861 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 200017 ']'
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 200017
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 200017 ']'
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 200017
00:24:11.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (200017) - No such process
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 200017 is not found'
00:24:11.120 Process with pid 200017 is not found
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:11.120 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:13.023 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:13.023
00:24:13.023 real 0m10.408s
00:24:13.023 user 0m27.577s
00:24:13.023 sys 0m5.201s
00:24:13.023 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:13.023 05:49:30
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:13.023 ************************************ 00:24:13.023 END TEST nvmf_shutdown_tc4 00:24:13.024 ************************************ 00:24:13.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:13.024 00:24:13.024 real 0m43.611s 00:24:13.024 user 1m48.669s 00:24:13.024 sys 0m14.798s 00:24:13.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:13.024 ************************************ 00:24:13.024 END TEST nvmf_shutdown 00:24:13.024 ************************************ 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:13.283 ************************************ 00:24:13.283 START TEST nvmf_nsid 00:24:13.283 ************************************ 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:13.283 * Looking for test storage... 
00:24:13.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.283 
05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:13.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.283 --rc genhtml_branch_coverage=1 00:24:13.283 --rc genhtml_function_coverage=1 00:24:13.283 --rc genhtml_legend=1 00:24:13.283 --rc geninfo_all_blocks=1 00:24:13.283 --rc 
geninfo_unexecuted_blocks=1 00:24:13.283 00:24:13.283 ' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:13.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.283 --rc genhtml_branch_coverage=1 00:24:13.283 --rc genhtml_function_coverage=1 00:24:13.283 --rc genhtml_legend=1 00:24:13.283 --rc geninfo_all_blocks=1 00:24:13.283 --rc geninfo_unexecuted_blocks=1 00:24:13.283 00:24:13.283 ' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:13.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.283 --rc genhtml_branch_coverage=1 00:24:13.283 --rc genhtml_function_coverage=1 00:24:13.283 --rc genhtml_legend=1 00:24:13.283 --rc geninfo_all_blocks=1 00:24:13.283 --rc geninfo_unexecuted_blocks=1 00:24:13.283 00:24:13.283 ' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:13.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.283 --rc genhtml_branch_coverage=1 00:24:13.283 --rc genhtml_function_coverage=1 00:24:13.283 --rc genhtml_legend=1 00:24:13.283 --rc geninfo_all_blocks=1 00:24:13.283 --rc geninfo_unexecuted_blocks=1 00:24:13.283 00:24:13.283 ' 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.283 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.543 05:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.543 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:20.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:20.110 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:20.110 Found net devices under 0000:af:00.0: cvl_0_0 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:20.110 Found net devices under 0000:af:00.1: cvl_0_1 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.110 05:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.110 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:20.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:24:20.110 00:24:20.110 --- 10.0.0.2 ping statistics --- 00:24:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.110 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:24:20.110 00:24:20.110 --- 10.0.0.1 ping statistics --- 00:24:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.110 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:20.110 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.111 05:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=205221 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 205221 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 205221 ']' 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.111 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:20.111 [2024-12-10 05:49:38.052760] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:24:20.111 [2024-12-10 05:49:38.052804] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.368 [2024-12-10 05:49:38.138313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.369 [2024-12-10 05:49:38.177733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.369 [2024-12-10 05:49:38.177771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.369 [2024-12-10 05:49:38.177778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.369 [2024-12-10 05:49:38.177784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.369 [2024-12-10 05:49:38.177789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.369 [2024-12-10 05:49:38.178321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.934 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.934 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:20.934 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.934 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.934 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=205460 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.192 
05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e4efa10b-edd2-41a7-b53c-eae55a106ef0 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=54d4fb6b-3479-4c07-a42c-27d34deb1a2b 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7206decb-20d7-4b4d-8bed-fe3f0f68078d 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.192 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:21.192 null0 00:24:21.192 [2024-12-10 05:49:38.957492] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:24:21.192 [2024-12-10 05:49:38.957537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205460 ] 00:24:21.192 null1 00:24:21.192 null2 00:24:21.192 [2024-12-10 05:49:38.971149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.192 [2024-12-10 05:49:38.995329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 205460 /var/tmp/tgt2.sock 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 205460 ']' 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:21.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.192 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:21.192 [2024-12-10 05:49:39.037328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.192 [2024-12-10 05:49:39.076830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.450 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.450 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:21.450 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:21.707 [2024-12-10 05:49:39.601722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.707 [2024-12-10 05:49:39.617799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:21.707 nvme0n1 nvme0n2 00:24:21.707 nvme1n1 00:24:21.965 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:21.965 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:21.965 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:22.896 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e4efa10b-edd2-41a7-b53c-eae55a106ef0 00:24:23.828 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:24.086 05:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e4efa10bedd241a7b53ceae55a106ef0 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E4EFA10BEDD241A7B53CEAE55A106EF0 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E4EFA10BEDD241A7B53CEAE55A106EF0 == \E\4\E\F\A\1\0\B\E\D\D\2\4\1\A\7\B\5\3\C\E\A\E\5\5\A\1\0\6\E\F\0 ]] 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 54d4fb6b-3479-4c07-a42c-27d34deb1a2b 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:24.086 
05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=54d4fb6b34794c07a42c27d34deb1a2b 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 54D4FB6B34794C07A42C27D34DEB1A2B 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 54D4FB6B34794C07A42C27D34DEB1A2B == \5\4\D\4\F\B\6\B\3\4\7\9\4\C\0\7\A\4\2\C\2\7\D\3\4\D\E\B\1\A\2\B ]] 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7206decb-20d7-4b4d-8bed-fe3f0f68078d 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7206decb20d74b4d8bedfe3f0f68078d 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7206DECB20D74B4D8BEDFE3F0F68078D 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7206DECB20D74B4D8BEDFE3F0F68078D == \7\2\0\6\D\E\C\B\2\0\D\7\4\B\4\D\8\B\E\D\F\E\3\F\0\F\6\8\0\7\8\D ]] 00:24:24.086 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 205460 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 205460 ']' 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 205460 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205460 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205460' 00:24:24.344 killing process with pid 205460 00:24:24.344 05:49:42 
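The NGUID checks above (target/nsid.sh@96-100) hinge on the `uuid2nguid` helper: the log's `tr -d -` strips the dashes from a `uuidgen` value, and the result is compared case-insensitively against the `.nguid` field that `jq` pulls out of `nvme id-ns -o json`. A minimal sketch of that mapping, using the first UUID from this run (assumption: SPDK derives the reported NGUID directly from the namespace UUID, as this log's comparisons show):

```shell
#!/bin/sh
# Reproduce the log's uuid2nguid step: dash-stripped, uppercased UUID
# should equal the NGUID string echoed by target/nsid.sh@43.
uuid=e4efa10b-edd2-41a7-b53c-eae55a106ef0

nguid=$(printf '%s' "$uuid" | tr -d -)        # drop dashes -> 32 hex chars
upper=$(printf '%s' "$nguid" | tr a-z A-Z)    # normalize case for the [[ == ]] test

echo "$upper"
```

Running this prints `E4EFA10BEDD241A7B53CEAE55A106EF0`, matching the value the test compares at target/nsid.sh@96; in the real run the right-hand side comes from `nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid` rather than a literal.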
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 205460 00:24:24.344 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 205460 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.601 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.601 rmmod nvme_tcp 00:24:24.601 rmmod nvme_fabrics 00:24:24.860 rmmod nvme_keyring 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 205221 ']' 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 205221 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 205221 ']' 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 205221 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.860 05:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205221 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205221' 00:24:24.860 killing process with pid 205221 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 205221 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 205221 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.860 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.120 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.120 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:25.120 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.120 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.120 05:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.025 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.025 00:24:27.025 real 0m13.835s 00:24:27.025 user 0m10.689s 00:24:27.025 sys 0m6.112s 00:24:27.025 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.025 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:27.025 ************************************ 00:24:27.025 END TEST nvmf_nsid 00:24:27.025 ************************************ 00:24:27.025 05:49:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:27.025 00:24:27.025 real 12m25.314s 00:24:27.025 user 26m9.433s 00:24:27.025 sys 3m57.108s 00:24:27.025 05:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.025 05:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:27.025 ************************************ 00:24:27.025 END TEST nvmf_target_extra 00:24:27.025 ************************************ 00:24:27.025 05:49:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:27.025 05:49:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.025 05:49:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.025 05:49:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:27.285 ************************************ 00:24:27.285 START TEST nvmf_host 00:24:27.285 ************************************ 00:24:27.285 05:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:27.285 * Looking for test storage... 
00:24:27.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.285 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.286 --rc genhtml_branch_coverage=1 00:24:27.286 --rc genhtml_function_coverage=1 00:24:27.286 --rc genhtml_legend=1 00:24:27.286 --rc geninfo_all_blocks=1 00:24:27.286 --rc geninfo_unexecuted_blocks=1 00:24:27.286 00:24:27.286 ' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.286 --rc genhtml_branch_coverage=1 00:24:27.286 --rc genhtml_function_coverage=1 00:24:27.286 --rc genhtml_legend=1 00:24:27.286 --rc 
geninfo_all_blocks=1 00:24:27.286 --rc geninfo_unexecuted_blocks=1 00:24:27.286 00:24:27.286 ' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.286 --rc genhtml_branch_coverage=1 00:24:27.286 --rc genhtml_function_coverage=1 00:24:27.286 --rc genhtml_legend=1 00:24:27.286 --rc geninfo_all_blocks=1 00:24:27.286 --rc geninfo_unexecuted_blocks=1 00:24:27.286 00:24:27.286 ' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.286 --rc genhtml_branch_coverage=1 00:24:27.286 --rc genhtml_function_coverage=1 00:24:27.286 --rc genhtml_legend=1 00:24:27.286 --rc geninfo_all_blocks=1 00:24:27.286 --rc geninfo_unexecuted_blocks=1 00:24:27.286 00:24:27.286 ' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.286 ************************************ 00:24:27.286 START TEST nvmf_multicontroller 00:24:27.286 ************************************ 00:24:27.286 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:27.546 * Looking for test storage... 
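The `[: : integer expression expected` message captured above (from test/nvmf/common.sh line 33) is a classic bash pitfall: `[ "$VAR" -eq 1 ]` aborts when the variable expands to the empty string, because `-eq` requires numeric operands. A minimal standalone reproduction, with two defensive variants — the `${VAR:-0}` default shown here is a generic fix for illustration, not necessarily the remedy SPDK applies:

```shell
#!/usr/bin/env bash
# Reproduction of: [ '' -eq 1 ]  ->  "[: : integer expression expected"
FLAG=""

if [ "$FLAG" -eq 1 ] 2>/dev/null; then   # errors (exit 2); treated as false
    echo "flag set"
fi

# Defensive variants that tolerate an empty/unset value:
if [ "${FLAG:-0}" -eq 1 ]; then echo "flag set"; else echo "flag unset"; fi
[[ $FLAG == 1 ]] && echo "flag set" || echo "flag unset (string compare)"
```

Note the trace itself keeps running after the error: `[` exits with status 2, the `if` simply takes the false branch, so the harness tolerates the noise.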
00:24:27.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:27.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.546 --rc genhtml_branch_coverage=1 00:24:27.546 --rc genhtml_function_coverage=1 
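The `cmp_versions` steps traced above split each version string on `.` and `-` into an array (`IFS=.-`, `read -ra`) and compare component by component. A simplified standalone reconstruction of that "less-than" check — the real scripts/common.sh handles more operators and cases; this sketch covers only numeric components:

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the "ver1 < ver2" comparison seen in the trace.
lt() {
    local -a ver1 ver2
    local v len a b
    IFS='.-' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS='.-' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components compare as 0
        [[ $a =~ ^[0-9]+$ ]] || a=0        # non-numeric parts compare as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions: not less-than
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is what the trace is doing when it decides the installed lcov (1.15) predates version 2 and therefore picks the legacy `--rc lcov_*` option spelling.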
00:24:27.546 --rc genhtml_legend=1 00:24:27.546 --rc geninfo_all_blocks=1 00:24:27.546 --rc geninfo_unexecuted_blocks=1 00:24:27.546 00:24:27.546 ' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:27.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.546 --rc genhtml_branch_coverage=1 00:24:27.546 --rc genhtml_function_coverage=1 00:24:27.546 --rc genhtml_legend=1 00:24:27.546 --rc geninfo_all_blocks=1 00:24:27.546 --rc geninfo_unexecuted_blocks=1 00:24:27.546 00:24:27.546 ' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:27.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.546 --rc genhtml_branch_coverage=1 00:24:27.546 --rc genhtml_function_coverage=1 00:24:27.546 --rc genhtml_legend=1 00:24:27.546 --rc geninfo_all_blocks=1 00:24:27.546 --rc geninfo_unexecuted_blocks=1 00:24:27.546 00:24:27.546 ' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:27.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.546 --rc genhtml_branch_coverage=1 00:24:27.546 --rc genhtml_function_coverage=1 00:24:27.546 --rc genhtml_legend=1 00:24:27.546 --rc geninfo_all_blocks=1 00:24:27.546 --rc geninfo_unexecuted_blocks=1 00:24:27.546 00:24:27.546 ' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.546 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.547 05:49:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.547 05:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.116 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.116 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.116 05:49:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.116 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
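The device-discovery loop traced above resolves each whitelisted PCI address to its kernel net device by globbing `/sys/bus/pci/devices/<bdf>/net/` and stripping the path, which is how `0000:af:00.0` maps to `cvl_0_0`. A standalone sketch of that lookup (the function name is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device lookup used by the discovery loop: the
# kernel lists an adapter's interfaces under /sys/bus/pci/devices/<bdf>/net/.
net_devs_for_pci() {
    local pci=$1
    local -a devs=( "/sys/bus/pci/devices/$pci/net/"* )
    # with nullglob unset, an unmatched glob stays literal, so test existence
    [[ -e ${devs[0]} ]] || { echo "no net devices under $pci" >&2; return 1; }
    printf '%s\n' "${devs[@]##*/}"   # strip the sysfs path, keep the name
}
# On the machine in the trace: net_devs_for_pci 0000:af:00.0 prints cvl_0_0
```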
"${pci_devs[@]}" 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.116 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.116 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.117 05:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.117 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.117 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.117 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.117 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:24:34.375 00:24:34.375 --- 10.0.0.2 ping statistics --- 00:24:34.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.375 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:34.375 00:24:34.375 --- 10.0.0.1 ping statistics --- 00:24:34.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.375 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=210017 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 210017 00:24:34.375 05:49:52 
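The `nvmf_tcp_init` commands traced above carve one port of the dual-port NIC into a private network namespace, so the target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and the initiator (10.0.0.1, default namespace) reach each other over the physical link, which both pings then verify. Collected into one function for readability — interface names and addresses follow the trace, and since it needs root plus the test bed's hardware, the function is only defined here, not invoked:

```shell
#!/usr/bin/env bash
# The namespace topology from the trace, gathered into one function.
setup_phy_topology() {
    local ns=cvl_0_0_ns_spdk
    local tgt_if=cvl_0_0      # target side, moved into the namespace
    local ini_if=cvl_0_1      # initiator side, stays in the default namespace

    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    # open the NVMe/TCP port on the initiator-side interface, as the trace does
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

    # verify connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this topology in place, the target process is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible later in the trace), so its listener binds inside the namespace.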
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 210017 ']' 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.375 05:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.375 [2024-12-10 05:49:52.209636] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:24:34.375 [2024-12-10 05:49:52.209681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.375 [2024-12-10 05:49:52.293729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.633 [2024-12-10 05:49:52.333560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.633 [2024-12-10 05:49:52.333595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:34.633 [2024-12-10 05:49:52.333602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.633 [2024-12-10 05:49:52.333607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.633 [2024-12-10 05:49:52.333612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.633 [2024-12-10 05:49:52.334930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.633 [2024-12-10 05:49:52.335039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.633 [2024-12-10 05:49:52.335040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.197 [2024-12-10 05:49:53.089179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.197 Malloc0 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.197 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.197 [2024-12-10 
05:49:53.148809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.455 [2024-12-10 05:49:53.160737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.455 Malloc1 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=210258 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 210258 /var/tmp/bdevperf.sock 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 210258 ']' 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.455 05:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.385 NVMe0n1 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.385 1 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:36.385 05:49:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.385 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.644 request: 00:24:36.644 { 00:24:36.644 "name": "NVMe0", 00:24:36.644 "trtype": "tcp", 00:24:36.644 "traddr": "10.0.0.2", 00:24:36.644 "adrfam": "ipv4", 00:24:36.644 "trsvcid": "4420", 00:24:36.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.644 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:36.644 "hostaddr": "10.0.0.1", 00:24:36.644 "prchk_reftag": false, 00:24:36.644 "prchk_guard": false, 00:24:36.644 "hdgst": false, 00:24:36.644 "ddgst": false, 00:24:36.644 "allow_unrecognized_csi": false, 00:24:36.644 "method": "bdev_nvme_attach_controller", 00:24:36.644 "req_id": 1 00:24:36.644 } 00:24:36.644 Got JSON-RPC error response 00:24:36.644 response: 00:24:36.644 { 00:24:36.644 "code": -114, 00:24:36.644 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:36.644 } 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:36.644 05:49:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.644 request: 00:24:36.644 { 00:24:36.644 "name": "NVMe0", 00:24:36.644 "trtype": "tcp", 00:24:36.644 "traddr": "10.0.0.2", 00:24:36.644 "adrfam": "ipv4", 00:24:36.644 "trsvcid": "4420", 00:24:36.644 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:36.644 "hostaddr": "10.0.0.1", 00:24:36.644 "prchk_reftag": false, 00:24:36.644 "prchk_guard": false, 00:24:36.644 "hdgst": false, 00:24:36.644 "ddgst": false, 00:24:36.644 "allow_unrecognized_csi": false, 00:24:36.644 "method": "bdev_nvme_attach_controller", 00:24:36.644 "req_id": 1 00:24:36.644 } 00:24:36.644 Got JSON-RPC error response 00:24:36.644 response: 00:24:36.644 { 00:24:36.644 "code": -114, 00:24:36.644 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:36.644 } 00:24:36.644 05:49:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.644 request: 00:24:36.644 { 00:24:36.644 "name": "NVMe0", 00:24:36.644 "trtype": "tcp", 00:24:36.644 "traddr": "10.0.0.2", 00:24:36.644 "adrfam": "ipv4", 00:24:36.644 "trsvcid": "4420", 00:24:36.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.644 "hostaddr": "10.0.0.1", 00:24:36.644 "prchk_reftag": false, 00:24:36.644 "prchk_guard": false, 00:24:36.644 "hdgst": false, 00:24:36.644 "ddgst": false, 00:24:36.644 "multipath": "disable", 00:24:36.644 "allow_unrecognized_csi": false, 00:24:36.644 "method": "bdev_nvme_attach_controller", 00:24:36.644 "req_id": 1 00:24:36.644 } 00:24:36.644 Got JSON-RPC error response 00:24:36.644 response: 00:24:36.644 { 00:24:36.644 "code": -114, 00:24:36.644 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:36.644 } 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:36.644 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.645 request: 00:24:36.645 { 00:24:36.645 "name": "NVMe0", 00:24:36.645 "trtype": "tcp", 00:24:36.645 "traddr": "10.0.0.2", 00:24:36.645 "adrfam": "ipv4", 00:24:36.645 "trsvcid": "4420", 00:24:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.645 "hostaddr": "10.0.0.1", 00:24:36.645 "prchk_reftag": false, 00:24:36.645 "prchk_guard": false, 00:24:36.645 "hdgst": false, 00:24:36.645 "ddgst": false, 00:24:36.645 "multipath": "failover", 00:24:36.645 "allow_unrecognized_csi": false, 00:24:36.645 "method": "bdev_nvme_attach_controller", 00:24:36.645 "req_id": 1 00:24:36.645 } 00:24:36.645 Got JSON-RPC error response 00:24:36.645 response: 00:24:36.645 { 00:24:36.645 "code": -114, 00:24:36.645 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:36.645 } 00:24:36.645 05:49:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.645 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.904 NVMe0n1 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.904 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:36.904 05:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.273 { 00:24:38.273 "results": [ 00:24:38.273 { 00:24:38.273 "job": "NVMe0n1", 00:24:38.273 "core_mask": "0x1", 00:24:38.273 "workload": "write", 00:24:38.273 "status": "finished", 00:24:38.273 "queue_depth": 128, 00:24:38.273 "io_size": 4096, 00:24:38.273 "runtime": 1.005484, 00:24:38.273 "iops": 25581.709902892537, 00:24:38.273 "mibps": 99.92855430817397, 00:24:38.273 "io_failed": 0, 00:24:38.273 "io_timeout": 0, 00:24:38.273 "avg_latency_us": 4992.686120682313, 00:24:38.273 "min_latency_us": 1435.5504761904763, 00:24:38.273 "max_latency_us": 8800.548571428571 00:24:38.273 } 00:24:38.273 ], 00:24:38.273 "core_count": 1 00:24:38.273 } 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 210258 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 210258 ']' 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 210258 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210258 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210258' 00:24:38.273 killing process with pid 210258 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 210258 00:24:38.273 05:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 210258 00:24:38.273 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:38.273 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.273 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:38.274 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:38.274 [2024-12-10 05:49:53.266095] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:24:38.274 [2024-12-10 05:49:53.266144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210258 ] 00:24:38.274 [2024-12-10 05:49:53.346131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.274 [2024-12-10 05:49:53.387869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.274 [2024-12-10 05:49:54.759437] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name f2db1a13-76fa-4ab5-ae8d-527d05708757 already exists 00:24:38.274 [2024-12-10 05:49:54.759465] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:f2db1a13-76fa-4ab5-ae8d-527d05708757 alias for bdev NVMe1n1 00:24:38.274 [2024-12-10 05:49:54.759473] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:38.274 Running I/O for 1 seconds... 00:24:38.274 25531.00 IOPS, 99.73 MiB/s 00:24:38.274 Latency(us) 00:24:38.274 [2024-12-10T04:49:56.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.274 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:38.274 NVMe0n1 : 1.01 25581.71 99.93 0.00 0.00 4992.69 1435.55 8800.55 00:24:38.274 [2024-12-10T04:49:56.233Z] =================================================================================================================== 00:24:38.274 [2024-12-10T04:49:56.233Z] Total : 25581.71 99.93 0.00 0.00 4992.69 1435.55 8800.55 00:24:38.274 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.274 00:24:38.274 Latency(us) 00:24:38.274 [2024-12-10T04:49:56.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.274 [2024-12-10T04:49:56.233Z] =================================================================================================================== 00:24:38.274 [2024-12-10T04:49:56.233Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:38.274 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.274 rmmod nvme_tcp 00:24:38.274 rmmod nvme_fabrics 00:24:38.274 rmmod nvme_keyring 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 210017 ']' 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 210017 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 210017 ']' 00:24:38.274 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 210017 
00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210017 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210017' 00:24:38.531 killing process with pid 210017 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 210017 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 210017 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.531 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.790 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.791 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:24:38.791 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.791 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.791 05:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.696 00:24:40.696 real 0m13.319s 00:24:40.696 user 0m17.643s 00:24:40.696 sys 0m5.876s 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.696 ************************************ 00:24:40.696 END TEST nvmf_multicontroller 00:24:40.696 ************************************ 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.696 ************************************ 00:24:40.696 START TEST nvmf_aer 00:24:40.696 ************************************ 00:24:40.696 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:40.956 * Looking for test storage... 
00:24:40.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.956 --rc genhtml_branch_coverage=1 00:24:40.956 --rc genhtml_function_coverage=1 00:24:40.956 --rc genhtml_legend=1 00:24:40.956 --rc geninfo_all_blocks=1 00:24:40.956 --rc geninfo_unexecuted_blocks=1 00:24:40.956 00:24:40.956 ' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.956 --rc 
genhtml_branch_coverage=1 00:24:40.956 --rc genhtml_function_coverage=1 00:24:40.956 --rc genhtml_legend=1 00:24:40.956 --rc geninfo_all_blocks=1 00:24:40.956 --rc geninfo_unexecuted_blocks=1 00:24:40.956 00:24:40.956 ' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.956 --rc genhtml_branch_coverage=1 00:24:40.956 --rc genhtml_function_coverage=1 00:24:40.956 --rc genhtml_legend=1 00:24:40.956 --rc geninfo_all_blocks=1 00:24:40.956 --rc geninfo_unexecuted_blocks=1 00:24:40.956 00:24:40.956 ' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.956 --rc genhtml_branch_coverage=1 00:24:40.956 --rc genhtml_function_coverage=1 00:24:40.956 --rc genhtml_legend=1 00:24:40.956 --rc geninfo_all_blocks=1 00:24:40.956 --rc geninfo_unexecuted_blocks=1 00:24:40.956 00:24:40.956 ' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.956 05:49:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.956 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.957 05:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.524 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:47.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:47.525 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.525 05:50:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:47.525 Found net devices under 0000:af:00.0: cvl_0_0 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:47.525 Found net devices under 0000:af:00.1: cvl_0_1 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.525 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:24:47.784 00:24:47.784 --- 10.0.0.2 ping statistics --- 00:24:47.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.784 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:24:47.784 00:24:47.784 --- 10.0.0.1 ping statistics --- 00:24:47.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.784 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=214716 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 214716 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 214716 ']' 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.784 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.784 [2024-12-10 05:50:05.656274] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:24:47.784 [2024-12-10 05:50:05.656317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.042 [2024-12-10 05:50:05.738027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.042 [2024-12-10 05:50:05.778680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:48.042 [2024-12-10 05:50:05.778716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.042 [2024-12-10 05:50:05.778723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.042 [2024-12-10 05:50:05.778730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.042 [2024-12-10 05:50:05.778735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.042 [2024-12-10 05:50:05.780091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.042 [2024-12-10 05:50:05.780128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.042 [2024-12-10 05:50:05.780252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.042 [2024-12-10 05:50:05.780253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 [2024-12-10 05:50:05.916769] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 Malloc0 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 [2024-12-10 05:50:05.975750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.042 [ 00:24:48.042 { 00:24:48.042 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:48.042 "subtype": "Discovery", 00:24:48.042 "listen_addresses": [], 00:24:48.042 "allow_any_host": true, 00:24:48.042 "hosts": [] 00:24:48.042 }, 00:24:48.042 { 00:24:48.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.042 "subtype": "NVMe", 00:24:48.042 "listen_addresses": [ 00:24:48.042 { 00:24:48.042 "trtype": "TCP", 00:24:48.042 "adrfam": "IPv4", 00:24:48.042 "traddr": "10.0.0.2", 00:24:48.042 "trsvcid": "4420" 00:24:48.042 } 00:24:48.042 ], 00:24:48.042 "allow_any_host": true, 00:24:48.042 "hosts": [], 00:24:48.042 "serial_number": "SPDK00000000000001", 00:24:48.042 "model_number": "SPDK bdev Controller", 00:24:48.042 "max_namespaces": 2, 00:24:48.042 "min_cntlid": 1, 00:24:48.042 "max_cntlid": 65519, 00:24:48.042 "namespaces": [ 00:24:48.042 { 00:24:48.042 "nsid": 1, 00:24:48.042 "bdev_name": "Malloc0", 00:24:48.042 "name": "Malloc0", 00:24:48.042 "nguid": "B3290C7E970C483E8E8DB361E2F71CBE", 00:24:48.042 "uuid": "b3290c7e-970c-483e-8e8d-b361e2f71cbe" 00:24:48.042 } 00:24:48.042 ] 00:24:48.042 } 00:24:48.042 ] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:48.042 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:48.299 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=214763 00:24:48.299 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:48.299 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:48.299 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:48.299 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:48.299 05:50:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:48.299 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.556 Malloc1 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.556 Asynchronous Event Request test 00:24:48.556 Attaching to 10.0.0.2 00:24:48.556 Attached to 10.0.0.2 00:24:48.556 Registering asynchronous event callbacks... 00:24:48.556 Starting namespace attribute notice tests for all controllers... 00:24:48.556 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:48.556 aer_cb - Changed Namespace 00:24:48.556 Cleaning up... 
00:24:48.556 [ 00:24:48.556 { 00:24:48.556 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:48.556 "subtype": "Discovery", 00:24:48.556 "listen_addresses": [], 00:24:48.556 "allow_any_host": true, 00:24:48.556 "hosts": [] 00:24:48.556 }, 00:24:48.556 { 00:24:48.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.556 "subtype": "NVMe", 00:24:48.556 "listen_addresses": [ 00:24:48.556 { 00:24:48.556 "trtype": "TCP", 00:24:48.556 "adrfam": "IPv4", 00:24:48.556 "traddr": "10.0.0.2", 00:24:48.556 "trsvcid": "4420" 00:24:48.556 } 00:24:48.556 ], 00:24:48.556 "allow_any_host": true, 00:24:48.556 "hosts": [], 00:24:48.556 "serial_number": "SPDK00000000000001", 00:24:48.556 "model_number": "SPDK bdev Controller", 00:24:48.556 "max_namespaces": 2, 00:24:48.556 "min_cntlid": 1, 00:24:48.556 "max_cntlid": 65519, 00:24:48.556 "namespaces": [ 00:24:48.556 { 00:24:48.556 "nsid": 1, 00:24:48.556 "bdev_name": "Malloc0", 00:24:48.556 "name": "Malloc0", 00:24:48.556 "nguid": "B3290C7E970C483E8E8DB361E2F71CBE", 00:24:48.556 "uuid": "b3290c7e-970c-483e-8e8d-b361e2f71cbe" 00:24:48.556 }, 00:24:48.556 { 00:24:48.556 "nsid": 2, 00:24:48.556 "bdev_name": "Malloc1", 00:24:48.556 "name": "Malloc1", 00:24:48.556 "nguid": "A972822CA6D04DCCB3E566FC2311D048", 00:24:48.556 "uuid": "a972822c-a6d0-4dcc-b3e5-66fc2311d048" 00:24:48.556 } 00:24:48.556 ] 00:24:48.556 } 00:24:48.556 ] 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 214763 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.556 05:50:06 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:48.556 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.557 rmmod nvme_tcp 00:24:48.557 rmmod nvme_fabrics 00:24:48.557 rmmod nvme_keyring 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
214716 ']' 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 214716 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 214716 ']' 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 214716 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.557 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214716 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214716' 00:24:48.815 killing process with pid 214716 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 214716 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 214716 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.815 05:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.350 00:24:51.350 real 0m10.159s 00:24:51.350 user 0m5.729s 00:24:51.350 sys 0m5.448s 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:51.350 ************************************ 00:24:51.350 END TEST nvmf_aer 00:24:51.350 ************************************ 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.350 ************************************ 00:24:51.350 START TEST nvmf_async_init 00:24:51.350 ************************************ 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:51.350 * Looking for test storage... 
00:24:51.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:24:51.350 05:50:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.350 05:50:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:51.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.350 --rc genhtml_branch_coverage=1 00:24:51.350 --rc genhtml_function_coverage=1 00:24:51.350 --rc genhtml_legend=1 00:24:51.350 --rc geninfo_all_blocks=1 00:24:51.350 --rc geninfo_unexecuted_blocks=1 00:24:51.350 
00:24:51.350 ' 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:51.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.350 --rc genhtml_branch_coverage=1 00:24:51.350 --rc genhtml_function_coverage=1 00:24:51.350 --rc genhtml_legend=1 00:24:51.350 --rc geninfo_all_blocks=1 00:24:51.350 --rc geninfo_unexecuted_blocks=1 00:24:51.350 00:24:51.350 ' 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:51.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.350 --rc genhtml_branch_coverage=1 00:24:51.350 --rc genhtml_function_coverage=1 00:24:51.350 --rc genhtml_legend=1 00:24:51.350 --rc geninfo_all_blocks=1 00:24:51.350 --rc geninfo_unexecuted_blocks=1 00:24:51.350 00:24:51.350 ' 00:24:51.350 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:51.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.350 --rc genhtml_branch_coverage=1 00:24:51.351 --rc genhtml_function_coverage=1 00:24:51.351 --rc genhtml_legend=1 00:24:51.351 --rc geninfo_all_blocks=1 00:24:51.351 --rc geninfo_unexecuted_blocks=1 00:24:51.351 00:24:51.351 ' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a62009b2d15b44c8bfb5708d5c8bdcd2 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.351 05:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.916 05:50:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:57.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:57.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.916 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:57.917 Found net devices under 0000:af:00.0: cvl_0_0 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:57.917 Found net devices under 0000:af:00.1: cvl_0_1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:57.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:24:57.917 00:24:57.917 --- 10.0.0.2 ping statistics --- 00:24:57.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.917 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:57.917 00:24:57.917 --- 10.0.0.1 ping statistics --- 00:24:57.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.917 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=218771 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 218771 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 218771 ']' 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.917 05:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.175 [2024-12-10 05:50:15.883151] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:24:58.175 [2024-12-10 05:50:15.883191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.175 [2024-12-10 05:50:15.966101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.176 [2024-12-10 05:50:16.002979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.176 [2024-12-10 05:50:16.003012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.176 [2024-12-10 05:50:16.003018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.176 [2024-12-10 05:50:16.003024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.176 [2024-12-10 05:50:16.003029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.176 [2024-12-10 05:50:16.003567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.110 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.110 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:59.110 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.110 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.110 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.110 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 [2024-12-10 05:50:16.744854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 null0 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a62009b2d15b44c8bfb5708d5c8bdcd2 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 [2024-12-10 05:50:16.797086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 nvme0n1 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 [ 00:24:59.111 { 00:24:59.111 "name": "nvme0n1", 00:24:59.111 "aliases": [ 00:24:59.111 "a62009b2-d15b-44c8-bfb5-708d5c8bdcd2" 00:24:59.111 ], 00:24:59.111 "product_name": "NVMe disk", 00:24:59.111 "block_size": 512, 00:24:59.111 "num_blocks": 2097152, 00:24:59.111 "uuid": "a62009b2-d15b-44c8-bfb5-708d5c8bdcd2", 00:24:59.111 "numa_id": 1, 00:24:59.111 "assigned_rate_limits": { 00:24:59.111 "rw_ios_per_sec": 0, 00:24:59.111 "rw_mbytes_per_sec": 0, 00:24:59.111 "r_mbytes_per_sec": 0, 00:24:59.111 "w_mbytes_per_sec": 0 00:24:59.111 }, 00:24:59.111 "claimed": false, 00:24:59.111 "zoned": false, 00:24:59.111 "supported_io_types": { 00:24:59.111 "read": true, 00:24:59.111 "write": true, 00:24:59.111 "unmap": false, 00:24:59.111 "flush": true, 00:24:59.111 "reset": true, 00:24:59.111 "nvme_admin": true, 00:24:59.111 "nvme_io": true, 00:24:59.111 "nvme_io_md": false, 00:24:59.111 "write_zeroes": true, 00:24:59.111 "zcopy": false, 00:24:59.111 "get_zone_info": false, 00:24:59.111 "zone_management": false, 00:24:59.111 "zone_append": false, 00:24:59.111 "compare": true, 00:24:59.111 "compare_and_write": true, 00:24:59.111 "abort": true, 00:24:59.111 "seek_hole": false, 00:24:59.111 "seek_data": false, 00:24:59.111 "copy": true, 00:24:59.111 
"nvme_iov_md": false 00:24:59.111 }, 00:24:59.111 "memory_domains": [ 00:24:59.111 { 00:24:59.111 "dma_device_id": "system", 00:24:59.111 "dma_device_type": 1 00:24:59.111 } 00:24:59.111 ], 00:24:59.111 "driver_specific": { 00:24:59.111 "nvme": [ 00:24:59.111 { 00:24:59.111 "trid": { 00:24:59.111 "trtype": "TCP", 00:24:59.111 "adrfam": "IPv4", 00:24:59.111 "traddr": "10.0.0.2", 00:24:59.111 "trsvcid": "4420", 00:24:59.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:59.111 }, 00:24:59.111 "ctrlr_data": { 00:24:59.111 "cntlid": 1, 00:24:59.111 "vendor_id": "0x8086", 00:24:59.111 "model_number": "SPDK bdev Controller", 00:24:59.111 "serial_number": "00000000000000000000", 00:24:59.111 "firmware_revision": "25.01", 00:24:59.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:59.111 "oacs": { 00:24:59.111 "security": 0, 00:24:59.111 "format": 0, 00:24:59.111 "firmware": 0, 00:24:59.111 "ns_manage": 0 00:24:59.111 }, 00:24:59.111 "multi_ctrlr": true, 00:24:59.111 "ana_reporting": false 00:24:59.111 }, 00:24:59.111 "vs": { 00:24:59.111 "nvme_version": "1.3" 00:24:59.111 }, 00:24:59.111 "ns_data": { 00:24:59.111 "id": 1, 00:24:59.111 "can_share": true 00:24:59.111 } 00:24:59.111 } 00:24:59.111 ], 00:24:59.111 "mp_policy": "active_passive" 00:24:59.111 } 00:24:59.111 } 00:24:59.111 ] 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 [2024-12-10 05:50:17.061665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.111 [2024-12-10 05:50:17.061725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x25dd8f0 (9): Bad file descriptor 00:24:59.369 [2024-12-10 05:50:17.193308] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 [ 00:24:59.370 { 00:24:59.370 "name": "nvme0n1", 00:24:59.370 "aliases": [ 00:24:59.370 "a62009b2-d15b-44c8-bfb5-708d5c8bdcd2" 00:24:59.370 ], 00:24:59.370 "product_name": "NVMe disk", 00:24:59.370 "block_size": 512, 00:24:59.370 "num_blocks": 2097152, 00:24:59.370 "uuid": "a62009b2-d15b-44c8-bfb5-708d5c8bdcd2", 00:24:59.370 "numa_id": 1, 00:24:59.370 "assigned_rate_limits": { 00:24:59.370 "rw_ios_per_sec": 0, 00:24:59.370 "rw_mbytes_per_sec": 0, 00:24:59.370 "r_mbytes_per_sec": 0, 00:24:59.370 "w_mbytes_per_sec": 0 00:24:59.370 }, 00:24:59.370 "claimed": false, 00:24:59.370 "zoned": false, 00:24:59.370 "supported_io_types": { 00:24:59.370 "read": true, 00:24:59.370 "write": true, 00:24:59.370 "unmap": false, 00:24:59.370 "flush": true, 00:24:59.370 "reset": true, 00:24:59.370 "nvme_admin": true, 00:24:59.370 "nvme_io": true, 00:24:59.370 "nvme_io_md": false, 00:24:59.370 "write_zeroes": true, 00:24:59.370 "zcopy": false, 00:24:59.370 "get_zone_info": false, 00:24:59.370 "zone_management": false, 00:24:59.370 "zone_append": false, 00:24:59.370 "compare": true, 00:24:59.370 "compare_and_write": true, 00:24:59.370 "abort": true, 00:24:59.370 "seek_hole": false, 00:24:59.370 "seek_data": false, 00:24:59.370 "copy": true, 00:24:59.370 "nvme_iov_md": false 00:24:59.370 }, 00:24:59.370 "memory_domains": [ 
00:24:59.370 { 00:24:59.370 "dma_device_id": "system", 00:24:59.370 "dma_device_type": 1 00:24:59.370 } 00:24:59.370 ], 00:24:59.370 "driver_specific": { 00:24:59.370 "nvme": [ 00:24:59.370 { 00:24:59.370 "trid": { 00:24:59.370 "trtype": "TCP", 00:24:59.370 "adrfam": "IPv4", 00:24:59.370 "traddr": "10.0.0.2", 00:24:59.370 "trsvcid": "4420", 00:24:59.370 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:59.370 }, 00:24:59.370 "ctrlr_data": { 00:24:59.370 "cntlid": 2, 00:24:59.370 "vendor_id": "0x8086", 00:24:59.370 "model_number": "SPDK bdev Controller", 00:24:59.370 "serial_number": "00000000000000000000", 00:24:59.370 "firmware_revision": "25.01", 00:24:59.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:59.370 "oacs": { 00:24:59.370 "security": 0, 00:24:59.370 "format": 0, 00:24:59.370 "firmware": 0, 00:24:59.370 "ns_manage": 0 00:24:59.370 }, 00:24:59.370 "multi_ctrlr": true, 00:24:59.370 "ana_reporting": false 00:24:59.370 }, 00:24:59.370 "vs": { 00:24:59.370 "nvme_version": "1.3" 00:24:59.370 }, 00:24:59.370 "ns_data": { 00:24:59.370 "id": 1, 00:24:59.370 "can_share": true 00:24:59.370 } 00:24:59.370 } 00:24:59.370 ], 00:24:59.370 "mp_policy": "active_passive" 00:24:59.370 } 00:24:59.370 } 00:24:59.370 ] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oHUyYDrr5O 
00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oHUyYDrr5O 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.oHUyYDrr5O 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 [2024-12-10 05:50:17.266286] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.370 [2024-12-10 05:50:17.266380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.370 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.370 [2024-12-10 05:50:17.286333] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.640 nvme0n1 00:24:59.640 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.640 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:59.640 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.640 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.640 [ 00:24:59.640 { 00:24:59.640 "name": "nvme0n1", 00:24:59.640 "aliases": [ 00:24:59.640 "a62009b2-d15b-44c8-bfb5-708d5c8bdcd2" 00:24:59.640 ], 00:24:59.640 "product_name": "NVMe disk", 00:24:59.640 "block_size": 512, 00:24:59.640 "num_blocks": 2097152, 00:24:59.640 "uuid": "a62009b2-d15b-44c8-bfb5-708d5c8bdcd2", 00:24:59.640 "numa_id": 1, 00:24:59.640 "assigned_rate_limits": { 00:24:59.640 "rw_ios_per_sec": 0, 00:24:59.640 
"rw_mbytes_per_sec": 0, 00:24:59.640 "r_mbytes_per_sec": 0, 00:24:59.640 "w_mbytes_per_sec": 0 00:24:59.640 }, 00:24:59.640 "claimed": false, 00:24:59.640 "zoned": false, 00:24:59.640 "supported_io_types": { 00:24:59.640 "read": true, 00:24:59.640 "write": true, 00:24:59.640 "unmap": false, 00:24:59.640 "flush": true, 00:24:59.640 "reset": true, 00:24:59.640 "nvme_admin": true, 00:24:59.640 "nvme_io": true, 00:24:59.640 "nvme_io_md": false, 00:24:59.640 "write_zeroes": true, 00:24:59.640 "zcopy": false, 00:24:59.640 "get_zone_info": false, 00:24:59.640 "zone_management": false, 00:24:59.640 "zone_append": false, 00:24:59.640 "compare": true, 00:24:59.640 "compare_and_write": true, 00:24:59.640 "abort": true, 00:24:59.640 "seek_hole": false, 00:24:59.640 "seek_data": false, 00:24:59.640 "copy": true, 00:24:59.640 "nvme_iov_md": false 00:24:59.640 }, 00:24:59.640 "memory_domains": [ 00:24:59.640 { 00:24:59.640 "dma_device_id": "system", 00:24:59.641 "dma_device_type": 1 00:24:59.641 } 00:24:59.641 ], 00:24:59.641 "driver_specific": { 00:24:59.641 "nvme": [ 00:24:59.641 { 00:24:59.641 "trid": { 00:24:59.641 "trtype": "TCP", 00:24:59.641 "adrfam": "IPv4", 00:24:59.641 "traddr": "10.0.0.2", 00:24:59.641 "trsvcid": "4421", 00:24:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:59.641 }, 00:24:59.641 "ctrlr_data": { 00:24:59.641 "cntlid": 3, 00:24:59.641 "vendor_id": "0x8086", 00:24:59.641 "model_number": "SPDK bdev Controller", 00:24:59.641 "serial_number": "00000000000000000000", 00:24:59.641 "firmware_revision": "25.01", 00:24:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:59.641 "oacs": { 00:24:59.641 "security": 0, 00:24:59.641 "format": 0, 00:24:59.641 "firmware": 0, 00:24:59.641 "ns_manage": 0 00:24:59.641 }, 00:24:59.641 "multi_ctrlr": true, 00:24:59.641 "ana_reporting": false 00:24:59.641 }, 00:24:59.641 "vs": { 00:24:59.641 "nvme_version": "1.3" 00:24:59.641 }, 00:24:59.641 "ns_data": { 00:24:59.641 "id": 1, 00:24:59.641 "can_share": true 00:24:59.641 } 
00:24:59.641 } 00:24:59.641 ], 00:24:59.641 "mp_policy": "active_passive" 00:24:59.641 } 00:24:59.641 } 00:24:59.641 ] 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.oHUyYDrr5O 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.641 rmmod nvme_tcp 00:24:59.641 rmmod nvme_fabrics 00:24:59.641 rmmod nvme_keyring 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:59.641 05:50:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 218771 ']' 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 218771 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 218771 ']' 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 218771 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218771 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218771' 00:24:59.641 killing process with pid 218771 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 218771 00:24:59.641 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 218771 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.965 05:50:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.965 05:50:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.872 00:25:01.872 real 0m10.864s 00:25:01.872 user 0m4.092s 00:25:01.872 sys 0m5.382s 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:01.872 ************************************ 00:25:01.872 END TEST nvmf_async_init 00:25:01.872 ************************************ 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.872 ************************************ 00:25:01.872 START TEST dma 00:25:01.872 ************************************ 00:25:01.872 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:02.131 * 
Looking for test storage... 00:25:02.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.132 --rc genhtml_branch_coverage=1 00:25:02.132 --rc genhtml_function_coverage=1 00:25:02.132 --rc genhtml_legend=1 00:25:02.132 --rc geninfo_all_blocks=1 00:25:02.132 --rc geninfo_unexecuted_blocks=1 00:25:02.132 00:25:02.132 ' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.132 --rc genhtml_branch_coverage=1 00:25:02.132 --rc genhtml_function_coverage=1 
00:25:02.132 --rc genhtml_legend=1 00:25:02.132 --rc geninfo_all_blocks=1 00:25:02.132 --rc geninfo_unexecuted_blocks=1 00:25:02.132 00:25:02.132 ' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.132 --rc genhtml_branch_coverage=1 00:25:02.132 --rc genhtml_function_coverage=1 00:25:02.132 --rc genhtml_legend=1 00:25:02.132 --rc geninfo_all_blocks=1 00:25:02.132 --rc geninfo_unexecuted_blocks=1 00:25:02.132 00:25:02.132 ' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:02.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.132 --rc genhtml_branch_coverage=1 00:25:02.132 --rc genhtml_function_coverage=1 00:25:02.132 --rc genhtml_legend=1 00:25:02.132 --rc geninfo_all_blocks=1 00:25:02.132 --rc geninfo_unexecuted_blocks=1 00:25:02.132 00:25:02.132 ' 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.132 05:50:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:02.132 
05:50:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:02.132 00:25:02.132 real 0m0.213s 00:25:02.132 user 0m0.125s 00:25:02.132 sys 0m0.100s 00:25:02.132 05:50:20 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:02.132 ************************************ 00:25:02.132 END TEST dma 00:25:02.132 ************************************ 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.132 05:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.393 ************************************ 00:25:02.393 START TEST nvmf_identify 00:25:02.393 ************************************ 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:02.393 * Looking for test storage... 
00:25:02.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:02.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.393 --rc genhtml_branch_coverage=1 00:25:02.393 --rc genhtml_function_coverage=1 00:25:02.393 --rc genhtml_legend=1 00:25:02.393 --rc geninfo_all_blocks=1 00:25:02.393 --rc geninfo_unexecuted_blocks=1 00:25:02.393 00:25:02.393 ' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:25:02.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.393 --rc genhtml_branch_coverage=1 00:25:02.393 --rc genhtml_function_coverage=1 00:25:02.393 --rc genhtml_legend=1 00:25:02.393 --rc geninfo_all_blocks=1 00:25:02.393 --rc geninfo_unexecuted_blocks=1 00:25:02.393 00:25:02.393 ' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:02.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.393 --rc genhtml_branch_coverage=1 00:25:02.393 --rc genhtml_function_coverage=1 00:25:02.393 --rc genhtml_legend=1 00:25:02.393 --rc geninfo_all_blocks=1 00:25:02.393 --rc geninfo_unexecuted_blocks=1 00:25:02.393 00:25:02.393 ' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:02.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.393 --rc genhtml_branch_coverage=1 00:25:02.393 --rc genhtml_function_coverage=1 00:25:02.393 --rc genhtml_legend=1 00:25:02.393 --rc geninfo_all_blocks=1 00:25:02.393 --rc geninfo_unexecuted_blocks=1 00:25:02.393 00:25:02.393 ' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:02.393 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.394 05:50:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.960 05:50:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.960 
05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.960 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.960 05:50:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.960 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
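The "Found net devices under ..." lines above come from a per-device sysfs glob (nvmf/common.sh@411). A hedged sketch of that lookup; the helper name `net_devs_under` and the overridable sysfs root are ours, while the PCI addresses in this run were 0000:af:00.0 and 0000:af:00.1:

```shell
# Hedged sketch of the sysfs lookup behind the "Found net devices under ..."
# log lines (nvmf/common.sh@411). Helper name and the fake-sysfs parameter
# are ours, added so the lookup can run without the e810 hardware.
net_devs_under() {
    base=${1:-/sys}   # sysfs root, overridable for testing
    pci=$2            # PCI address, e.g. 0000:af:00.0
    for d in "$base/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"   # strip the path, keep the ifname
    done
}
```

On this host the two e810 ports resolved to cvl_0_0 and cvl_0_1.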
00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.960 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.219 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.219 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.219 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:09.219 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.219 05:50:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:25:09.219 00:25:09.219 --- 10.0.0.2 ping statistics --- 00:25:09.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.219 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:09.219 00:25:09.219 --- 10.0.0.1 ping statistics --- 00:25:09.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.219 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=223020 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 223020 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 223020 ']' 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
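The namespace plumbing logged above (nvmf/common.sh@267 through @291) amounts to roughly the following sequence. Interface names, the namespace name, and the 10.0.0.x addresses are the ones from this run; the DRY_RUN guard is our addition so the commands can be previewed without root or the e810 NICs:

```shell
# Hedged reconstruction of the netns setup in the log (nvmf/common.sh@267-@291).
# DRY_RUN=1 (the default here) only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
NS=cvl_0_0_ns_spdk      # target side lives in its own network namespace
TGT_IF=cvl_0_0          # moved into $NS, gets NVMF_FIRST_TARGET_IP
INI_IF=cvl_0_1          # stays in the root namespace, gets the initiator IP
run() { [ "$DRY_RUN" = 1 ] && echo "$*" || "$@"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> initiator
```

The two pings at the end mirror the verification step the log records before the target is started.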
00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.219 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 [2024-12-10 05:50:27.108979] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:25:09.219 [2024-12-10 05:50:27.109023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.478 [2024-12-10 05:50:27.193796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:09.478 [2024-12-10 05:50:27.235386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.478 [2024-12-10 05:50:27.235424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.478 [2024-12-10 05:50:27.235431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.478 [2024-12-10 05:50:27.235437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.478 [2024-12-10 05:50:27.235443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
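The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above (common/autotest_common.sh@842, with max_retries=100) boils down to polling for the RPC socket. A simplified sketch under that assumption; the real helper does additional checks (e.g. that the target pid is still alive), and the function name `waitforsock` is ours:

```shell
# Simplified sketch of the wait step logged above: poll until the target's
# UNIX domain RPC socket appears. Socket path and retry count mirror the
# log; this omits the extra liveness checks the real helper performs.
waitforsock() {
    sock=${1:-/var/tmp/spdk.sock}
    retries=${2:-100}
    i=0
    while [ "$i" -lt "$retries" ]; do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        i=$((i + 1))
        sleep 0.1
    done
    return 1                          # gave up before the app came up
}
```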
00:25:09.478 [2024-12-10 05:50:27.236973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.478 [2024-12-10 05:50:27.237080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.478 [2024-12-10 05:50:27.237185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.478 [2024-12-10 05:50:27.237185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.043 [2024-12-10 05:50:27.959476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:10.043 05:50:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 Malloc0 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.303 05:50:28 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 [2024-12-10 05:50:28.063624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 05:50:28 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 [ 00:25:10.303 { 00:25:10.303 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:10.303 "subtype": "Discovery", 00:25:10.303 "listen_addresses": [ 00:25:10.303 { 00:25:10.303 "trtype": "TCP", 00:25:10.303 "adrfam": "IPv4", 00:25:10.303 "traddr": "10.0.0.2", 00:25:10.303 "trsvcid": "4420" 00:25:10.303 } 00:25:10.303 ], 00:25:10.303 "allow_any_host": true, 00:25:10.303 "hosts": [] 00:25:10.303 }, 00:25:10.303 { 00:25:10.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.303 "subtype": "NVMe", 00:25:10.303 "listen_addresses": [ 00:25:10.303 { 00:25:10.303 "trtype": "TCP", 00:25:10.303 "adrfam": "IPv4", 00:25:10.303 "traddr": "10.0.0.2", 00:25:10.303 "trsvcid": "4420" 00:25:10.303 } 00:25:10.303 ], 00:25:10.303 "allow_any_host": true, 00:25:10.303 "hosts": [], 00:25:10.303 "serial_number": "SPDK00000000000001", 00:25:10.303 "model_number": "SPDK bdev Controller", 00:25:10.303 "max_namespaces": 32, 00:25:10.303 "min_cntlid": 1, 00:25:10.303 "max_cntlid": 65519, 00:25:10.303 "namespaces": [ 00:25:10.303 { 00:25:10.303 "nsid": 1, 00:25:10.303 "bdev_name": "Malloc0", 00:25:10.303 "name": "Malloc0", 00:25:10.303 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:10.303 "eui64": "ABCDEF0123456789", 00:25:10.303 "uuid": "12143d34-af81-4e63-ad36-dcbd0e6bb3b1" 00:25:10.303 } 00:25:10.303 ] 00:25:10.303 } 00:25:10.303 ] 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.303 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:10.303 [2024-12-10 05:50:28.119691] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:25:10.303 [2024-12-10 05:50:28.119731] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223112 ] 00:25:10.303 [2024-12-10 05:50:28.159758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:10.303 [2024-12-10 05:50:28.159798] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:10.303 [2024-12-10 05:50:28.159803] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:10.303 [2024-12-10 05:50:28.159813] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:10.303 [2024-12-10 05:50:28.159822] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:10.303 [2024-12-10 05:50:28.163436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:10.303 [2024-12-10 05:50:28.163474] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e10690 0 00:25:10.303 [2024-12-10 05:50:28.163592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:10.303 [2024-12-10 05:50:28.163600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:10.303 [2024-12-10 05:50:28.163605] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:10.303 [2024-12-10 05:50:28.163609] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:10.303 [2024-12-10 05:50:28.163635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.303 [2024-12-10 05:50:28.163641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.303 [2024-12-10 05:50:28.163646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.303 [2024-12-10 05:50:28.163659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:10.303 [2024-12-10 05:50:28.163676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.303 [2024-12-10 05:50:28.171230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.303 [2024-12-10 05:50:28.171240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.171243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.171258] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:10.304 [2024-12-10 05:50:28.171265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:10.304 [2024-12-10 05:50:28.171270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:10.304 [2024-12-10 05:50:28.171283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 
00:25:10.304 [2024-12-10 05:50:28.171297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.171310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 05:50:28.171482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.171488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.171491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.171499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:10.304 [2024-12-10 05:50:28.171506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:10.304 [2024-12-10 05:50:28.171513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.304 [2024-12-10 05:50:28.171525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.171535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 05:50:28.171597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.171602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:10.304 [2024-12-10 05:50:28.171605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.171614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:10.304 [2024-12-10 05:50:28.171621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:10.304 [2024-12-10 05:50:28.171627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.304 [2024-12-10 05:50:28.171639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.171648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 05:50:28.171709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.171715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.171718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.171725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:10.304 [2024-12-10 05:50:28.171733] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.304 [2024-12-10 05:50:28.171749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.171758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 05:50:28.171831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.171837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.171840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.171848] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:10.304 [2024-12-10 05:50:28.171852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:10.304 [2024-12-10 05:50:28.171858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:10.304 [2024-12-10 05:50:28.171966] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:10.304 [2024-12-10 05:50:28.171971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:25:10.304 [2024-12-10 05:50:28.171978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.171984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.304 [2024-12-10 05:50:28.171989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.171999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 05:50:28.172056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.172062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.172065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.172072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:10.304 [2024-12-10 05:50:28.172080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.304 [2024-12-10 05:50:28.172092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.172101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 
05:50:28.172157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.172163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.172166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.172175] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:10.304 [2024-12-10 05:50:28.172180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:10.304 [2024-12-10 05:50:28.172187] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:10.304 [2024-12-10 05:50:28.172194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:10.304 [2024-12-10 05:50:28.172202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.304 [2024-12-10 05:50:28.172211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.304 [2024-12-10 05:50:28.172227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.304 [2024-12-10 05:50:28.172317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.304 [2024-12-10 05:50:28.172323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:25:10.304 [2024-12-10 05:50:28.172327] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172330] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e10690): datao=0, datal=4096, cccid=0 00:25:10.304 [2024-12-10 05:50:28.172334] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e72100) on tqpair(0x1e10690): expected_datao=0, payload_size=4096 00:25:10.304 [2024-12-10 05:50:28.172338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172352] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172356] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.304 [2024-12-10 05:50:28.172395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.304 [2024-12-10 05:50:28.172398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.304 [2024-12-10 05:50:28.172402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.304 [2024-12-10 05:50:28.172409] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:10.304 [2024-12-10 05:50:28.172413] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:10.304 [2024-12-10 05:50:28.172416] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:10.304 [2024-12-10 05:50:28.172421] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:10.304 [2024-12-10 05:50:28.172425] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:10.304 [2024-12-10 05:50:28.172430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:10.304 [2024-12-10 05:50:28.172439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:10.305 [2024-12-10 05:50:28.172447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:10.305 [2024-12-10 05:50:28.172470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.305 [2024-12-10 05:50:28.172535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.305 [2024-12-10 05:50:28.172545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.305 [2024-12-10 05:50:28.172548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.305 [2024-12-10 05:50:28.172558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172570] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.305 [2024-12-10 05:50:28.172575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.305 [2024-12-10 05:50:28.172591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.305 [2024-12-10 05:50:28.172607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.305 [2024-12-10 05:50:28.172622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:10.305 [2024-12-10 05:50:28.172632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:10.305 [2024-12-10 05:50:28.172638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.305 [2024-12-10 05:50:28.172658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72100, cid 0, qid 0 00:25:10.305 [2024-12-10 05:50:28.172663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72280, cid 1, qid 0 00:25:10.305 [2024-12-10 05:50:28.172667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72400, cid 2, qid 0 00:25:10.305 [2024-12-10 05:50:28.172671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.305 [2024-12-10 05:50:28.172675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72700, cid 4, qid 0 00:25:10.305 [2024-12-10 05:50:28.172768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.305 [2024-12-10 05:50:28.172774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.305 [2024-12-10 05:50:28.172777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72700) on tqpair=0x1e10690 00:25:10.305 [2024-12-10 05:50:28.172785] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:10.305 [2024-12-10 05:50:28.172791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:25:10.305 [2024-12-10 05:50:28.172800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.305 [2024-12-10 05:50:28.172819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72700, cid 4, qid 0 00:25:10.305 [2024-12-10 05:50:28.172894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.305 [2024-12-10 05:50:28.172900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.305 [2024-12-10 05:50:28.172903] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172906] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e10690): datao=0, datal=4096, cccid=4 00:25:10.305 [2024-12-10 05:50:28.172910] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e72700) on tqpair(0x1e10690): expected_datao=0, payload_size=4096 00:25:10.305 [2024-12-10 05:50:28.172914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172919] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172922] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.305 [2024-12-10 05:50:28.172939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.305 [2024-12-10 05:50:28.172941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1e72700) on tqpair=0x1e10690 00:25:10.305 [2024-12-10 05:50:28.172956] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:10.305 [2024-12-10 05:50:28.172977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.172987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.305 [2024-12-10 05:50:28.172993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.172999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.173004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.305 [2024-12-10 05:50:28.173017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72700, cid 4, qid 0 00:25:10.305 [2024-12-10 05:50:28.173022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72880, cid 5, qid 0 00:25:10.305 [2024-12-10 05:50:28.173120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.305 [2024-12-10 05:50:28.173126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.305 [2024-12-10 05:50:28.173129] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.173132] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e10690): datao=0, datal=1024, cccid=4 00:25:10.305 [2024-12-10 05:50:28.173136] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e72700) on tqpair(0x1e10690): expected_datao=0, payload_size=1024 00:25:10.305 [2024-12-10 05:50:28.173139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.173145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.173151] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.173156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.305 [2024-12-10 05:50:28.173161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.305 [2024-12-10 05:50:28.173164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.173167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72880) on tqpair=0x1e10690 00:25:10.305 [2024-12-10 05:50:28.213338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.305 [2024-12-10 05:50:28.213352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.305 [2024-12-10 05:50:28.213355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72700) on tqpair=0x1e10690 00:25:10.305 [2024-12-10 05:50:28.213372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.213384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.305 [2024-12-10 05:50:28.213400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72700, cid 4, qid 0 00:25:10.305 [2024-12-10 05:50:28.213473] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.305 [2024-12-10 05:50:28.213479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.305 [2024-12-10 05:50:28.213482] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e10690): datao=0, datal=3072, cccid=4 00:25:10.305 [2024-12-10 05:50:28.213489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e72700) on tqpair(0x1e10690): expected_datao=0, payload_size=3072 00:25:10.305 [2024-12-10 05:50:28.213493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213499] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213503] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.305 [2024-12-10 05:50:28.213533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.305 [2024-12-10 05:50:28.213536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72700) on tqpair=0x1e10690 00:25:10.305 [2024-12-10 05:50:28.213547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.305 [2024-12-10 05:50:28.213551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e10690) 00:25:10.305 [2024-12-10 05:50:28.213556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.305 [2024-12-10 05:50:28.213570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72700, cid 4, qid 0 00:25:10.305 [2024-12-10 
05:50:28.213641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.305 [2024-12-10 05:50:28.213646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.306 [2024-12-10 05:50:28.213649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.306 [2024-12-10 05:50:28.213652] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e10690): datao=0, datal=8, cccid=4 00:25:10.306 [2024-12-10 05:50:28.213656] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e72700) on tqpair(0x1e10690): expected_datao=0, payload_size=8 00:25:10.306 [2024-12-10 05:50:28.213659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.306 [2024-12-10 05:50:28.213665] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.306 [2024-12-10 05:50:28.213668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.571 [2024-12-10 05:50:28.258229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.571 [2024-12-10 05:50:28.258247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.571 [2024-12-10 05:50:28.258250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.571 [2024-12-10 05:50:28.258254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72700) on tqpair=0x1e10690 00:25:10.571 ===================================================== 00:25:10.571 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:10.571 ===================================================== 00:25:10.571 Controller Capabilities/Features 00:25:10.571 ================================ 00:25:10.571 Vendor ID: 0000 00:25:10.571 Subsystem Vendor ID: 0000 00:25:10.571 Serial Number: .................... 00:25:10.571 Model Number: ........................................ 
00:25:10.571 Firmware Version: 25.01 00:25:10.571 Recommended Arb Burst: 0 00:25:10.571 IEEE OUI Identifier: 00 00 00 00:25:10.571 Multi-path I/O 00:25:10.571 May have multiple subsystem ports: No 00:25:10.571 May have multiple controllers: No 00:25:10.571 Associated with SR-IOV VF: No 00:25:10.571 Max Data Transfer Size: 131072 00:25:10.571 Max Number of Namespaces: 0 00:25:10.571 Max Number of I/O Queues: 1024 00:25:10.571 NVMe Specification Version (VS): 1.3 00:25:10.571 NVMe Specification Version (Identify): 1.3 00:25:10.571 Maximum Queue Entries: 128 00:25:10.571 Contiguous Queues Required: Yes 00:25:10.571 Arbitration Mechanisms Supported 00:25:10.571 Weighted Round Robin: Not Supported 00:25:10.571 Vendor Specific: Not Supported 00:25:10.571 Reset Timeout: 15000 ms 00:25:10.571 Doorbell Stride: 4 bytes 00:25:10.571 NVM Subsystem Reset: Not Supported 00:25:10.571 Command Sets Supported 00:25:10.571 NVM Command Set: Supported 00:25:10.571 Boot Partition: Not Supported 00:25:10.571 Memory Page Size Minimum: 4096 bytes 00:25:10.571 Memory Page Size Maximum: 4096 bytes 00:25:10.571 Persistent Memory Region: Not Supported 00:25:10.571 Optional Asynchronous Events Supported 00:25:10.571 Namespace Attribute Notices: Not Supported 00:25:10.571 Firmware Activation Notices: Not Supported 00:25:10.571 ANA Change Notices: Not Supported 00:25:10.571 PLE Aggregate Log Change Notices: Not Supported 00:25:10.571 LBA Status Info Alert Notices: Not Supported 00:25:10.571 EGE Aggregate Log Change Notices: Not Supported 00:25:10.571 Normal NVM Subsystem Shutdown event: Not Supported 00:25:10.571 Zone Descriptor Change Notices: Not Supported 00:25:10.571 Discovery Log Change Notices: Supported 00:25:10.571 Controller Attributes 00:25:10.571 128-bit Host Identifier: Not Supported 00:25:10.571 Non-Operational Permissive Mode: Not Supported 00:25:10.571 NVM Sets: Not Supported 00:25:10.571 Read Recovery Levels: Not Supported 00:25:10.571 Endurance Groups: Not Supported 00:25:10.571 
Predictable Latency Mode: Not Supported 00:25:10.571 Traffic Based Keep ALive: Not Supported 00:25:10.571 Namespace Granularity: Not Supported 00:25:10.571 SQ Associations: Not Supported 00:25:10.571 UUID List: Not Supported 00:25:10.571 Multi-Domain Subsystem: Not Supported 00:25:10.571 Fixed Capacity Management: Not Supported 00:25:10.571 Variable Capacity Management: Not Supported 00:25:10.571 Delete Endurance Group: Not Supported 00:25:10.571 Delete NVM Set: Not Supported 00:25:10.571 Extended LBA Formats Supported: Not Supported 00:25:10.571 Flexible Data Placement Supported: Not Supported 00:25:10.571 00:25:10.571 Controller Memory Buffer Support 00:25:10.571 ================================ 00:25:10.571 Supported: No 00:25:10.571 00:25:10.571 Persistent Memory Region Support 00:25:10.571 ================================ 00:25:10.571 Supported: No 00:25:10.571 00:25:10.571 Admin Command Set Attributes 00:25:10.571 ============================ 00:25:10.571 Security Send/Receive: Not Supported 00:25:10.571 Format NVM: Not Supported 00:25:10.571 Firmware Activate/Download: Not Supported 00:25:10.571 Namespace Management: Not Supported 00:25:10.571 Device Self-Test: Not Supported 00:25:10.571 Directives: Not Supported 00:25:10.571 NVMe-MI: Not Supported 00:25:10.571 Virtualization Management: Not Supported 00:25:10.571 Doorbell Buffer Config: Not Supported 00:25:10.571 Get LBA Status Capability: Not Supported 00:25:10.571 Command & Feature Lockdown Capability: Not Supported 00:25:10.571 Abort Command Limit: 1 00:25:10.571 Async Event Request Limit: 4 00:25:10.571 Number of Firmware Slots: N/A 00:25:10.571 Firmware Slot 1 Read-Only: N/A 00:25:10.571 Firmware Activation Without Reset: N/A 00:25:10.571 Multiple Update Detection Support: N/A 00:25:10.571 Firmware Update Granularity: No Information Provided 00:25:10.571 Per-Namespace SMART Log: No 00:25:10.571 Asymmetric Namespace Access Log Page: Not Supported 00:25:10.571 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:10.571 Command Effects Log Page: Not Supported 00:25:10.571 Get Log Page Extended Data: Supported 00:25:10.571 Telemetry Log Pages: Not Supported 00:25:10.571 Persistent Event Log Pages: Not Supported 00:25:10.571 Supported Log Pages Log Page: May Support 00:25:10.571 Commands Supported & Effects Log Page: Not Supported 00:25:10.571 Feature Identifiers & Effects Log Page:May Support 00:25:10.571 NVMe-MI Commands & Effects Log Page: May Support 00:25:10.571 Data Area 4 for Telemetry Log: Not Supported 00:25:10.571 Error Log Page Entries Supported: 128 00:25:10.571 Keep Alive: Not Supported 00:25:10.571 00:25:10.571 NVM Command Set Attributes 00:25:10.571 ========================== 00:25:10.571 Submission Queue Entry Size 00:25:10.571 Max: 1 00:25:10.571 Min: 1 00:25:10.571 Completion Queue Entry Size 00:25:10.571 Max: 1 00:25:10.571 Min: 1 00:25:10.571 Number of Namespaces: 0 00:25:10.571 Compare Command: Not Supported 00:25:10.571 Write Uncorrectable Command: Not Supported 00:25:10.571 Dataset Management Command: Not Supported 00:25:10.572 Write Zeroes Command: Not Supported 00:25:10.572 Set Features Save Field: Not Supported 00:25:10.572 Reservations: Not Supported 00:25:10.572 Timestamp: Not Supported 00:25:10.572 Copy: Not Supported 00:25:10.572 Volatile Write Cache: Not Present 00:25:10.572 Atomic Write Unit (Normal): 1 00:25:10.572 Atomic Write Unit (PFail): 1 00:25:10.572 Atomic Compare & Write Unit: 1 00:25:10.572 Fused Compare & Write: Supported 00:25:10.572 Scatter-Gather List 00:25:10.572 SGL Command Set: Supported 00:25:10.572 SGL Keyed: Supported 00:25:10.572 SGL Bit Bucket Descriptor: Not Supported 00:25:10.572 SGL Metadata Pointer: Not Supported 00:25:10.572 Oversized SGL: Not Supported 00:25:10.572 SGL Metadata Address: Not Supported 00:25:10.572 SGL Offset: Supported 00:25:10.572 Transport SGL Data Block: Not Supported 00:25:10.572 Replay Protected Memory Block: Not Supported 00:25:10.572 00:25:10.572 
Firmware Slot Information 00:25:10.572 ========================= 00:25:10.572 Active slot: 0 00:25:10.572 00:25:10.572 00:25:10.572 Error Log 00:25:10.572 ========= 00:25:10.572 00:25:10.572 Active Namespaces 00:25:10.572 ================= 00:25:10.572 Discovery Log Page 00:25:10.572 ================== 00:25:10.572 Generation Counter: 2 00:25:10.572 Number of Records: 2 00:25:10.572 Record Format: 0 00:25:10.572 00:25:10.572 Discovery Log Entry 0 00:25:10.572 ---------------------- 00:25:10.572 Transport Type: 3 (TCP) 00:25:10.572 Address Family: 1 (IPv4) 00:25:10.572 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:10.572 Entry Flags: 00:25:10.572 Duplicate Returned Information: 1 00:25:10.572 Explicit Persistent Connection Support for Discovery: 1 00:25:10.572 Transport Requirements: 00:25:10.572 Secure Channel: Not Required 00:25:10.572 Port ID: 0 (0x0000) 00:25:10.572 Controller ID: 65535 (0xffff) 00:25:10.572 Admin Max SQ Size: 128 00:25:10.572 Transport Service Identifier: 4420 00:25:10.572 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:10.572 Transport Address: 10.0.0.2 00:25:10.572 Discovery Log Entry 1 00:25:10.572 ---------------------- 00:25:10.572 Transport Type: 3 (TCP) 00:25:10.572 Address Family: 1 (IPv4) 00:25:10.572 Subsystem Type: 2 (NVM Subsystem) 00:25:10.572 Entry Flags: 00:25:10.572 Duplicate Returned Information: 0 00:25:10.572 Explicit Persistent Connection Support for Discovery: 0 00:25:10.572 Transport Requirements: 00:25:10.572 Secure Channel: Not Required 00:25:10.572 Port ID: 0 (0x0000) 00:25:10.572 Controller ID: 65535 (0xffff) 00:25:10.572 Admin Max SQ Size: 128 00:25:10.572 Transport Service Identifier: 4420 00:25:10.572 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:10.572 Transport Address: 10.0.0.2 [2024-12-10 05:50:28.258338] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:10.572 [2024-12-10 
05:50:28.258349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72100) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.572 [2024-12-10 05:50:28.258360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72280) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.572 [2024-12-10 05:50:28.258369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72400) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.572 [2024-12-10 05:50:28.258377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.572 [2024-12-10 05:50:28.258388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.572 [2024-12-10 05:50:28.258402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.572 [2024-12-10 05:50:28.258416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.572 [2024-12-10 05:50:28.258491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.572 [2024-12-10 
05:50:28.258497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.572 [2024-12-10 05:50:28.258500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.572 [2024-12-10 05:50:28.258522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.572 [2024-12-10 05:50:28.258535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.572 [2024-12-10 05:50:28.258616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.572 [2024-12-10 05:50:28.258622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.572 [2024-12-10 05:50:28.258625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258632] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:10.572 [2024-12-10 05:50:28.258636] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:10.572 [2024-12-10 05:50:28.258644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.572 
[2024-12-10 05:50:28.258653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.572 [2024-12-10 05:50:28.258658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.572 [2024-12-10 05:50:28.258668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.572 [2024-12-10 05:50:28.258726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.572 [2024-12-10 05:50:28.258732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.572 [2024-12-10 05:50:28.258735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.572 [2024-12-10 05:50:28.258759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.572 [2024-12-10 05:50:28.258768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.572 [2024-12-10 05:50:28.258825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.572 [2024-12-10 05:50:28.258831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.572 [2024-12-10 05:50:28.258834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on 
tqpair=0x1e10690 00:25:10.572 [2024-12-10 05:50:28.258846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.572 [2024-12-10 05:50:28.258852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.572 [2024-12-10 05:50:28.258858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.572 [2024-12-10 05:50:28.258866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.575 [2024-12-10 05:50:28.266227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.575 [2024-12-10 05:50:28.266235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.575 [2024-12-10 05:50:28.266238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.266241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on tqpair=0x1e10690 00:25:10.575 [2024-12-10 05:50:28.266250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.266254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.266257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e10690) 00:25:10.575 [2024-12-10 05:50:28.266263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.575 [2024-12-10 05:50:28.266274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e72580, cid 3, qid 0 00:25:10.575 [2024-12-10 05:50:28.266341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.575 [2024-12-10 05:50:28.266347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.575 [2024-12-10 05:50:28.266350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.266353]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e72580) on tqpair=0x1e10690 00:25:10.575 [2024-12-10 05:50:28.266359] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:25:10.575 00:25:10.575 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:10.575 [2024-12-10 05:50:28.302992] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:25:10.575 [2024-12-10 05:50:28.303031] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223168 ] 00:25:10.575 [2024-12-10 05:50:28.340179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:10.575 [2024-12-10 05:50:28.344224] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:10.575 [2024-12-10 05:50:28.344230] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:10.575 [2024-12-10 05:50:28.344241] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:10.575 [2024-12-10 05:50:28.344256] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:10.575 [2024-12-10 05:50:28.344530] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:10.575 [2024-12-10 05:50:28.344556] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14e2690 0 00:25:10.575 [2024-12-10 05:50:28.359236] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:10.575 [2024-12-10 05:50:28.359251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:10.575 [2024-12-10 05:50:28.359255] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:10.575 [2024-12-10 05:50:28.359258] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:10.575 [2024-12-10 05:50:28.359282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.359287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.359290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.575 [2024-12-10 05:50:28.359300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:10.575 [2024-12-10 05:50:28.359315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.575 [2024-12-10 05:50:28.367230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.575 [2024-12-10 05:50:28.367238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.575 [2024-12-10 05:50:28.367242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.575 [2024-12-10 05:50:28.367246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.575 [2024-12-10 05:50:28.367257] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:10.576 [2024-12-10 05:50:28.367263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:10.576 [2024-12-10 05:50:28.367267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:10.576 [2024-12-10 05:50:28.367277] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.367291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.367304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.367389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.367394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.367398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.576 [2024-12-10 05:50:28.367405] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:10.576 [2024-12-10 05:50:28.367412] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:10.576 [2024-12-10 05:50:28.367419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.367434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.367444] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.367504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.367510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.367513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.576 [2024-12-10 05:50:28.367520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:10.576 [2024-12-10 05:50:28.367527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:10.576 [2024-12-10 05:50:28.367533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.367545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.367554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.367620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.367626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.367628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 
00:25:10.576 [2024-12-10 05:50:28.367636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:10.576 [2024-12-10 05:50:28.367644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.367656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.367665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.367730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.367736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.367739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.576 [2024-12-10 05:50:28.367746] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:10.576 [2024-12-10 05:50:28.367750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:10.576 [2024-12-10 05:50:28.367757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:10.576 [2024-12-10 05:50:28.367864] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN 
= 1 00:25:10.576 [2024-12-10 05:50:28.367870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:10.576 [2024-12-10 05:50:28.367876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.367888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.367897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.367958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.367963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.367966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.576 [2024-12-10 05:50:28.367973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:10.576 [2024-12-10 05:50:28.367982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.367988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.367994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.368003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.368068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.368073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.368077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.576 [2024-12-10 05:50:28.368083] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:10.576 [2024-12-10 05:50:28.368087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:10.576 [2024-12-10 05:50:28.368094] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:10.576 [2024-12-10 05:50:28.368101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:10.576 [2024-12-10 05:50:28.368108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.368116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.576 [2024-12-10 05:50:28.368126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.368224] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.576 [2024-12-10 05:50:28.368230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.576 [2024-12-10 05:50:28.368233] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=4096, cccid=0 00:25:10.576 [2024-12-10 05:50:28.368241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544100) on tqpair(0x14e2690): expected_datao=0, payload_size=4096 00:25:10.576 [2024-12-10 05:50:28.368246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368253] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368256] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.368269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.368272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.576 [2024-12-10 05:50:28.368281] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:10.576 [2024-12-10 05:50:28.368286] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:10.576 [2024-12-10 05:50:28.368289] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:10.576 [2024-12-10 05:50:28.368293] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 
00:25:10.576 [2024-12-10 05:50:28.368297] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:10.576 [2024-12-10 05:50:28.368301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:10.576 [2024-12-10 05:50:28.368311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:10.576 [2024-12-10 05:50:28.368318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.576 [2024-12-10 05:50:28.368325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 00:25:10.576 [2024-12-10 05:50:28.368331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:10.576 [2024-12-10 05:50:28.368341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.576 [2024-12-10 05:50:28.368403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.576 [2024-12-10 05:50:28.368408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.576 [2024-12-10 05:50:28.368411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.577 [2024-12-10 05:50:28.368420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2690) 
00:25:10.577 [2024-12-10 05:50:28.368431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.577 [2024-12-10 05:50:28.368436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.368447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.577 [2024-12-10 05:50:28.368452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.368463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.577 [2024-12-10 05:50:28.368470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.368481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.577 [2024-12-10 05:50:28.368485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.368497] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.368502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.368511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.577 [2024-12-10 05:50:28.368522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544100, cid 0, qid 0 00:25:10.577 [2024-12-10 05:50:28.368526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544280, cid 1, qid 0 00:25:10.577 [2024-12-10 05:50:28.368530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544400, cid 2, qid 0 00:25:10.577 [2024-12-10 05:50:28.368534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.577 [2024-12-10 05:50:28.368538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.577 [2024-12-10 05:50:28.368632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.577 [2024-12-10 05:50:28.368638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.577 [2024-12-10 05:50:28.368641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.577 [2024-12-10 05:50:28.368648] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:10.577 [2024-12-10 05:50:28.368652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.368659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.368666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.368672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.368683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:10.577 [2024-12-10 05:50:28.368693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.577 [2024-12-10 05:50:28.368760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.577 [2024-12-10 05:50:28.368765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.577 [2024-12-10 05:50:28.368768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.577 [2024-12-10 05:50:28.368819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.368830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:10.577 [2024-12-10 
05:50:28.368836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.368845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.577 [2024-12-10 05:50:28.368855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.577 [2024-12-10 05:50:28.368926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.577 [2024-12-10 05:50:28.368932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.577 [2024-12-10 05:50:28.368935] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=4096, cccid=4 00:25:10.577 [2024-12-10 05:50:28.368942] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544700) on tqpair(0x14e2690): expected_datao=0, payload_size=4096 00:25:10.577 [2024-12-10 05:50:28.368946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368956] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368960] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.368995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.577 [2024-12-10 05:50:28.369000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.577 [2024-12-10 05:50:28.369003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.577 
[2024-12-10 05:50:28.369016] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:10.577 [2024-12-10 05:50:28.369027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.369036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.369042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.369050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.577 [2024-12-10 05:50:28.369061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.577 [2024-12-10 05:50:28.369140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.577 [2024-12-10 05:50:28.369145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.577 [2024-12-10 05:50:28.369148] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369151] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=4096, cccid=4 00:25:10.577 [2024-12-10 05:50:28.369155] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544700) on tqpair(0x14e2690): expected_datao=0, payload_size=4096 00:25:10.577 [2024-12-10 05:50:28.369158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369169] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369172] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.577 [2024-12-10 05:50:28.369201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.577 [2024-12-10 05:50:28.369205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.577 [2024-12-10 05:50:28.369225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.369233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.369240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.577 [2024-12-10 05:50:28.369248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.577 [2024-12-10 05:50:28.369258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.577 [2024-12-10 05:50:28.369328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.577 [2024-12-10 05:50:28.369334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.577 [2024-12-10 05:50:28.369337] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369341] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=4096, cccid=4 00:25:10.577 
[2024-12-10 05:50:28.369344] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544700) on tqpair(0x14e2690): expected_datao=0, payload_size=4096 00:25:10.577 [2024-12-10 05:50:28.369348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369358] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369362] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.577 [2024-12-10 05:50:28.369385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.577 [2024-12-10 05:50:28.369388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.577 [2024-12-10 05:50:28.369391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.577 [2024-12-10 05:50:28.369398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:10.577 [2024-12-10 05:50:28.369404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:10.578 [2024-12-10 05:50:28.369412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:10.578 [2024-12-10 05:50:28.369417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:10.578 [2024-12-10 05:50:28.369421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:10.578 [2024-12-10 05:50:28.369426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to set host ID (timeout 30000 ms) 00:25:10.578 [2024-12-10 05:50:28.369430] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:10.578 [2024-12-10 05:50:28.369434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:10.578 [2024-12-10 05:50:28.369438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:10.578 [2024-12-10 05:50:28.369450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.578 [2024-12-10 05:50:28.369490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.578 [2024-12-10 05:50:28.369495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544880, cid 5, qid 0 00:25:10.578 [2024-12-10 05:50:28.369576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.369581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 
[2024-12-10 05:50:28.369584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.369593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.369597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 [2024-12-10 05:50:28.369600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544880) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.369612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544880, cid 5, qid 0 00:25:10.578 [2024-12-10 05:50:28.369694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.369700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 [2024-12-10 05:50:28.369703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544880) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.369714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 
05:50:28.369722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544880, cid 5, qid 0 00:25:10.578 [2024-12-10 05:50:28.369791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.369797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 [2024-12-10 05:50:28.369799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544880) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.369810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544880, cid 5, qid 0 00:25:10.578 [2024-12-10 05:50:28.369888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.369894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 [2024-12-10 05:50:28.369897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544880) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.369912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.369959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14e2690) 00:25:10.578 [2024-12-10 05:50:28.369964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.578 [2024-12-10 05:50:28.369974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544880, cid 5, qid 0 00:25:10.578 [2024-12-10 05:50:28.369979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544700, cid 4, qid 0 00:25:10.578 [2024-12-10 05:50:28.369983] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544a00, cid 6, qid 0 00:25:10.578 [2024-12-10 05:50:28.369987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544b80, cid 7, qid 0 00:25:10.578 [2024-12-10 05:50:28.370123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.578 [2024-12-10 05:50:28.370129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.578 [2024-12-10 05:50:28.370132] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370135] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=8192, cccid=5 00:25:10.578 [2024-12-10 05:50:28.370138] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544880) on tqpair(0x14e2690): expected_datao=0, payload_size=8192 00:25:10.578 [2024-12-10 05:50:28.370142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370152] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370156] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.578 [2024-12-10 05:50:28.370170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.578 [2024-12-10 05:50:28.370172] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370175] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=512, cccid=4 00:25:10.578 [2024-12-10 05:50:28.370179] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544700) on tqpair(0x14e2690): expected_datao=0, payload_size=512 00:25:10.578 [2024-12-10 05:50:28.370183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370190] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370193] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.578 [2024-12-10 05:50:28.370203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.578 [2024-12-10 05:50:28.370206] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370209] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=512, cccid=6 00:25:10.578 [2024-12-10 05:50:28.370212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544a00) on tqpair(0x14e2690): expected_datao=0, payload_size=512 00:25:10.578 [2024-12-10 05:50:28.370216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370228] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370231] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.578 [2024-12-10 05:50:28.370240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.578 [2024-12-10 05:50:28.370243] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370246] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2690): datao=0, datal=4096, cccid=7 00:25:10.578 [2024-12-10 05:50:28.370250] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544b80) on tqpair(0x14e2690): expected_datao=0, payload_size=4096 00:25:10.578 [2024-12-10 05:50:28.370253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:25:10.578 [2024-12-10 05:50:28.370262] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.370274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 [2024-12-10 05:50:28.370277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544880) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.370291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.370296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.578 [2024-12-10 05:50:28.370299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.578 [2024-12-10 05:50:28.370302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544700) on tqpair=0x14e2690 00:25:10.578 [2024-12-10 05:50:28.370310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.578 [2024-12-10 05:50:28.370315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.579 [2024-12-10 05:50:28.370318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.579 [2024-12-10 05:50:28.370321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544a00) on tqpair=0x14e2690 00:25:10.579 [2024-12-10 05:50:28.370327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.579 [2024-12-10 05:50:28.370332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.579 [2024-12-10 05:50:28.370335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.579 [2024-12-10 05:50:28.370338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544b80) on tqpair=0x14e2690 00:25:10.579 
===================================================== 00:25:10.579 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.579 ===================================================== 00:25:10.579 Controller Capabilities/Features 00:25:10.579 ================================ 00:25:10.579 Vendor ID: 8086 00:25:10.579 Subsystem Vendor ID: 8086 00:25:10.579 Serial Number: SPDK00000000000001 00:25:10.579 Model Number: SPDK bdev Controller 00:25:10.579 Firmware Version: 25.01 00:25:10.579 Recommended Arb Burst: 6 00:25:10.579 IEEE OUI Identifier: e4 d2 5c 00:25:10.579 Multi-path I/O 00:25:10.579 May have multiple subsystem ports: Yes 00:25:10.579 May have multiple controllers: Yes 00:25:10.579 Associated with SR-IOV VF: No 00:25:10.579 Max Data Transfer Size: 131072 00:25:10.579 Max Number of Namespaces: 32 00:25:10.579 Max Number of I/O Queues: 127 00:25:10.579 NVMe Specification Version (VS): 1.3 00:25:10.579 NVMe Specification Version (Identify): 1.3 00:25:10.579 Maximum Queue Entries: 128 00:25:10.579 Contiguous Queues Required: Yes 00:25:10.579 Arbitration Mechanisms Supported 00:25:10.579 Weighted Round Robin: Not Supported 00:25:10.579 Vendor Specific: Not Supported 00:25:10.579 Reset Timeout: 15000 ms 00:25:10.579 Doorbell Stride: 4 bytes 00:25:10.579 NVM Subsystem Reset: Not Supported 00:25:10.579 Command Sets Supported 00:25:10.579 NVM Command Set: Supported 00:25:10.579 Boot Partition: Not Supported 00:25:10.579 Memory Page Size Minimum: 4096 bytes 00:25:10.579 Memory Page Size Maximum: 4096 bytes 00:25:10.579 Persistent Memory Region: Not Supported 00:25:10.579 Optional Asynchronous Events Supported 00:25:10.579 Namespace Attribute Notices: Supported 00:25:10.579 Firmware Activation Notices: Not Supported 00:25:10.579 ANA Change Notices: Not Supported 00:25:10.579 PLE Aggregate Log Change Notices: Not Supported 00:25:10.579 LBA Status Info Alert Notices: Not Supported 00:25:10.579 EGE Aggregate Log Change Notices: Not Supported 
00:25:10.579 Normal NVM Subsystem Shutdown event: Not Supported 00:25:10.579 Zone Descriptor Change Notices: Not Supported 00:25:10.579 Discovery Log Change Notices: Not Supported 00:25:10.579 Controller Attributes 00:25:10.579 128-bit Host Identifier: Supported 00:25:10.579 Non-Operational Permissive Mode: Not Supported 00:25:10.579 NVM Sets: Not Supported 00:25:10.579 Read Recovery Levels: Not Supported 00:25:10.579 Endurance Groups: Not Supported 00:25:10.579 Predictable Latency Mode: Not Supported 00:25:10.579 Traffic Based Keep ALive: Not Supported 00:25:10.579 Namespace Granularity: Not Supported 00:25:10.579 SQ Associations: Not Supported 00:25:10.579 UUID List: Not Supported 00:25:10.579 Multi-Domain Subsystem: Not Supported 00:25:10.579 Fixed Capacity Management: Not Supported 00:25:10.579 Variable Capacity Management: Not Supported 00:25:10.579 Delete Endurance Group: Not Supported 00:25:10.579 Delete NVM Set: Not Supported 00:25:10.579 Extended LBA Formats Supported: Not Supported 00:25:10.579 Flexible Data Placement Supported: Not Supported 00:25:10.579 00:25:10.579 Controller Memory Buffer Support 00:25:10.579 ================================ 00:25:10.579 Supported: No 00:25:10.579 00:25:10.579 Persistent Memory Region Support 00:25:10.579 ================================ 00:25:10.579 Supported: No 00:25:10.579 00:25:10.579 Admin Command Set Attributes 00:25:10.579 ============================ 00:25:10.579 Security Send/Receive: Not Supported 00:25:10.579 Format NVM: Not Supported 00:25:10.579 Firmware Activate/Download: Not Supported 00:25:10.579 Namespace Management: Not Supported 00:25:10.579 Device Self-Test: Not Supported 00:25:10.579 Directives: Not Supported 00:25:10.579 NVMe-MI: Not Supported 00:25:10.579 Virtualization Management: Not Supported 00:25:10.579 Doorbell Buffer Config: Not Supported 00:25:10.579 Get LBA Status Capability: Not Supported 00:25:10.579 Command & Feature Lockdown Capability: Not Supported 00:25:10.579 Abort Command 
Limit: 4 00:25:10.579 Async Event Request Limit: 4 00:25:10.579 Number of Firmware Slots: N/A 00:25:10.579 Firmware Slot 1 Read-Only: N/A 00:25:10.579 Firmware Activation Without Reset: N/A 00:25:10.579 Multiple Update Detection Support: N/A 00:25:10.579 Firmware Update Granularity: No Information Provided 00:25:10.579 Per-Namespace SMART Log: No 00:25:10.579 Asymmetric Namespace Access Log Page: Not Supported 00:25:10.579 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:10.579 Command Effects Log Page: Supported 00:25:10.579 Get Log Page Extended Data: Supported 00:25:10.579 Telemetry Log Pages: Not Supported 00:25:10.579 Persistent Event Log Pages: Not Supported 00:25:10.579 Supported Log Pages Log Page: May Support 00:25:10.579 Commands Supported & Effects Log Page: Not Supported 00:25:10.579 Feature Identifiers & Effects Log Page:May Support 00:25:10.579 NVMe-MI Commands & Effects Log Page: May Support 00:25:10.579 Data Area 4 for Telemetry Log: Not Supported 00:25:10.579 Error Log Page Entries Supported: 128 00:25:10.579 Keep Alive: Supported 00:25:10.579 Keep Alive Granularity: 10000 ms 00:25:10.579 00:25:10.579 NVM Command Set Attributes 00:25:10.579 ========================== 00:25:10.579 Submission Queue Entry Size 00:25:10.579 Max: 64 00:25:10.579 Min: 64 00:25:10.579 Completion Queue Entry Size 00:25:10.579 Max: 16 00:25:10.579 Min: 16 00:25:10.579 Number of Namespaces: 32 00:25:10.579 Compare Command: Supported 00:25:10.579 Write Uncorrectable Command: Not Supported 00:25:10.579 Dataset Management Command: Supported 00:25:10.579 Write Zeroes Command: Supported 00:25:10.579 Set Features Save Field: Not Supported 00:25:10.579 Reservations: Supported 00:25:10.579 Timestamp: Not Supported 00:25:10.579 Copy: Supported 00:25:10.579 Volatile Write Cache: Present 00:25:10.579 Atomic Write Unit (Normal): 1 00:25:10.579 Atomic Write Unit (PFail): 1 00:25:10.579 Atomic Compare & Write Unit: 1 00:25:10.579 Fused Compare & Write: Supported 00:25:10.579 Scatter-Gather 
List 00:25:10.579 SGL Command Set: Supported 00:25:10.579 SGL Keyed: Supported 00:25:10.579 SGL Bit Bucket Descriptor: Not Supported 00:25:10.579 SGL Metadata Pointer: Not Supported 00:25:10.579 Oversized SGL: Not Supported 00:25:10.579 SGL Metadata Address: Not Supported 00:25:10.579 SGL Offset: Supported 00:25:10.579 Transport SGL Data Block: Not Supported 00:25:10.579 Replay Protected Memory Block: Not Supported 00:25:10.579 00:25:10.579 Firmware Slot Information 00:25:10.579 ========================= 00:25:10.579 Active slot: 1 00:25:10.579 Slot 1 Firmware Revision: 25.01 00:25:10.579 00:25:10.579 00:25:10.579 Commands Supported and Effects 00:25:10.579 ============================== 00:25:10.579 Admin Commands 00:25:10.579 -------------- 00:25:10.579 Get Log Page (02h): Supported 00:25:10.579 Identify (06h): Supported 00:25:10.579 Abort (08h): Supported 00:25:10.579 Set Features (09h): Supported 00:25:10.579 Get Features (0Ah): Supported 00:25:10.579 Asynchronous Event Request (0Ch): Supported 00:25:10.579 Keep Alive (18h): Supported 00:25:10.579 I/O Commands 00:25:10.579 ------------ 00:25:10.579 Flush (00h): Supported LBA-Change 00:25:10.579 Write (01h): Supported LBA-Change 00:25:10.579 Read (02h): Supported 00:25:10.579 Compare (05h): Supported 00:25:10.579 Write Zeroes (08h): Supported LBA-Change 00:25:10.579 Dataset Management (09h): Supported LBA-Change 00:25:10.579 Copy (19h): Supported LBA-Change 00:25:10.579 00:25:10.579 Error Log 00:25:10.579 ========= 00:25:10.579 00:25:10.579 Arbitration 00:25:10.579 =========== 00:25:10.579 Arbitration Burst: 1 00:25:10.579 00:25:10.579 Power Management 00:25:10.579 ================ 00:25:10.579 Number of Power States: 1 00:25:10.579 Current Power State: Power State #0 00:25:10.579 Power State #0: 00:25:10.579 Max Power: 0.00 W 00:25:10.579 Non-Operational State: Operational 00:25:10.579 Entry Latency: Not Reported 00:25:10.579 Exit Latency: Not Reported 00:25:10.579 Relative Read Throughput: 0 00:25:10.579 
Relative Read Latency: 0 00:25:10.579 Relative Write Throughput: 0 00:25:10.579 Relative Write Latency: 0 00:25:10.579 Idle Power: Not Reported 00:25:10.579 Active Power: Not Reported 00:25:10.579 Non-Operational Permissive Mode: Not Supported 00:25:10.579 00:25:10.579 Health Information 00:25:10.579 ================== 00:25:10.579 Critical Warnings: 00:25:10.579 Available Spare Space: OK 00:25:10.579 Temperature: OK 00:25:10.579 Device Reliability: OK 00:25:10.579 Read Only: No 00:25:10.579 Volatile Memory Backup: OK 00:25:10.580 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:10.580 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:10.580 Available Spare: 0% 00:25:10.580 Available Spare Threshold: 0% 00:25:10.580 Life Percentage Used:[2024-12-10 05:50:28.370415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.370425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.370436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544b80, cid 7, qid 0 00:25:10.580 [2024-12-10 05:50:28.370510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.370516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.370519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544b80) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370548] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:10.580 [2024-12-10 05:50:28.370557] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x1544100) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.580 [2024-12-10 05:50:28.370566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544280) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.580 [2024-12-10 05:50:28.370574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544400) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.580 [2024-12-10 05:50:28.370582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.580 [2024-12-10 05:50:28.370593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.370605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.370616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.370680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.370686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.370688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.370709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.370720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.370790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.370796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.370799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370806] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:10.580 [2024-12-10 05:50:28.370810] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:10.580 [2024-12-10 05:50:28.370817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.370831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.370840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.370901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.370907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.370909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.370921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.370927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.370932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.370942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.371002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.371008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.371011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.371021] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.371033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.371042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.371099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.371105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.371108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.371118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.371130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.371139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.371198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.371204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.371207] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.371210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.375227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.375234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.375237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2690) 00:25:10.580 [2024-12-10 05:50:28.375245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.580 [2024-12-10 05:50:28.375255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544580, cid 3, qid 0 00:25:10.580 [2024-12-10 05:50:28.375411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.580 [2024-12-10 05:50:28.375417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.580 [2024-12-10 05:50:28.375420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.580 [2024-12-10 05:50:28.375423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544580) on tqpair=0x14e2690 00:25:10.580 [2024-12-10 05:50:28.375430] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:10.580 0% 00:25:10.580 Data Units Read: 0 00:25:10.580 Data Units Written: 0 00:25:10.580 Host Read Commands: 0 00:25:10.580 Host Write Commands: 0 00:25:10.580 Controller Busy Time: 0 minutes 00:25:10.580 Power Cycles: 0 00:25:10.580 Power On Hours: 0 hours 00:25:10.580 Unsafe Shutdowns: 0 00:25:10.580 Unrecoverable Media Errors: 0 00:25:10.580 Lifetime Error Log Entries: 0 00:25:10.580 Warning Temperature Time: 0 minutes 00:25:10.580 Critical Temperature Time: 0 minutes 00:25:10.580 00:25:10.580 
Number of Queues 00:25:10.580 ================ 00:25:10.580 Number of I/O Submission Queues: 127 00:25:10.580 Number of I/O Completion Queues: 127 00:25:10.580 00:25:10.580 Active Namespaces 00:25:10.580 ================= 00:25:10.580 Namespace ID:1 00:25:10.580 Error Recovery Timeout: Unlimited 00:25:10.580 Command Set Identifier: NVM (00h) 00:25:10.580 Deallocate: Supported 00:25:10.581 Deallocated/Unwritten Error: Not Supported 00:25:10.581 Deallocated Read Value: Unknown 00:25:10.581 Deallocate in Write Zeroes: Not Supported 00:25:10.581 Deallocated Guard Field: 0xFFFF 00:25:10.581 Flush: Supported 00:25:10.581 Reservation: Supported 00:25:10.581 Namespace Sharing Capabilities: Multiple Controllers 00:25:10.581 Size (in LBAs): 131072 (0GiB) 00:25:10.581 Capacity (in LBAs): 131072 (0GiB) 00:25:10.581 Utilization (in LBAs): 131072 (0GiB) 00:25:10.581 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:10.581 EUI64: ABCDEF0123456789 00:25:10.581 UUID: 12143d34-af81-4e63-ad36-dcbd0e6bb3b1 00:25:10.581 Thin Provisioning: Not Supported 00:25:10.581 Per-NS Atomic Units: Yes 00:25:10.581 Atomic Boundary Size (Normal): 0 00:25:10.581 Atomic Boundary Size (PFail): 0 00:25:10.581 Atomic Boundary Offset: 0 00:25:10.581 Maximum Single Source Range Length: 65535 00:25:10.581 Maximum Copy Length: 65535 00:25:10.581 Maximum Source Range Count: 1 00:25:10.581 NGUID/EUI64 Never Reused: No 00:25:10.581 Namespace Write Protected: No 00:25:10.581 Number of LBA Formats: 1 00:25:10.581 Current LBA Format: LBA Format #00 00:25:10.581 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:10.581 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.581 rmmod nvme_tcp 00:25:10.581 rmmod nvme_fabrics 00:25:10.581 rmmod nvme_keyring 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 223020 ']' 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 223020 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 223020 ']' 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 223020 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.581 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223020 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223020' 00:25:10.839 killing process with pid 223020 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 223020 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 223020 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.839 05:50:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.376 05:50:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.376 00:25:13.376 real 0m10.697s 00:25:13.376 user 0m8.017s 00:25:13.376 sys 0m5.468s 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:13.376 ************************************ 00:25:13.376 END TEST nvmf_identify 00:25:13.376 ************************************ 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.376 ************************************ 00:25:13.376 START TEST nvmf_perf 00:25:13.376 ************************************ 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:13.376 * Looking for test storage... 
00:25:13.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:13.376 05:50:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.376 --rc genhtml_branch_coverage=1 00:25:13.376 --rc genhtml_function_coverage=1 00:25:13.376 --rc genhtml_legend=1 00:25:13.376 --rc geninfo_all_blocks=1 00:25:13.376 --rc geninfo_unexecuted_blocks=1 00:25:13.376 00:25:13.376 ' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:13.376 --rc genhtml_branch_coverage=1 00:25:13.376 --rc genhtml_function_coverage=1 00:25:13.376 --rc genhtml_legend=1 00:25:13.376 --rc geninfo_all_blocks=1 00:25:13.376 --rc geninfo_unexecuted_blocks=1 00:25:13.376 00:25:13.376 ' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.376 --rc genhtml_branch_coverage=1 00:25:13.376 --rc genhtml_function_coverage=1 00:25:13.376 --rc genhtml_legend=1 00:25:13.376 --rc geninfo_all_blocks=1 00:25:13.376 --rc geninfo_unexecuted_blocks=1 00:25:13.376 00:25:13.376 ' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.376 --rc genhtml_branch_coverage=1 00:25:13.376 --rc genhtml_function_coverage=1 00:25:13.376 --rc genhtml_legend=1 00:25:13.376 --rc geninfo_all_blocks=1 00:25:13.376 --rc geninfo_unexecuted_blocks=1 00:25:13.376 00:25:13.376 ' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.376 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:13.377 05:50:31 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.377 05:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.945 05:50:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.945 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.946 
05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:19.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:19.946 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:19.946 Found net devices under 0000:af:00.0: cvl_0_0 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.946 05:50:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:19.946 Found net devices under 0000:af:00.1: cvl_0_1 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:25:19.946 00:25:19.946 --- 10.0.0.2 ping statistics --- 00:25:19.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.946 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:25:19.946 00:25:19.946 --- 10.0.0.1 ping statistics --- 00:25:19.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.946 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=227116 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 227116 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 227116 ']' 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.946 05:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:19.946 [2024-12-10 05:50:37.845652] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:25:19.946 [2024-12-10 05:50:37.845696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.205 [2024-12-10 05:50:37.930759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.205 [2024-12-10 05:50:37.971364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.205 [2024-12-10 05:50:37.971402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.205 [2024-12-10 05:50:37.971409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.205 [2024-12-10 05:50:37.971415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.205 [2024-12-10 05:50:37.971423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.205 [2024-12-10 05:50:37.972805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.205 [2024-12-10 05:50:37.972917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.205 [2024-12-10 05:50:37.973022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.205 [2024-12-10 05:50:37.973024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.770 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.770 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:20.770 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.770 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.770 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.028 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.028 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:21.028 05:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:24.308 05:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:24.308 05:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:24.308 05:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:24.308 05:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:24.308 05:50:42 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:24.308 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:24.308 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:24.308 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:24.308 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:24.565 [2024-12-10 05:50:42.353155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.566 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.823 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:24.823 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.081 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.081 05:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:25.081 05:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.338 [2024-12-10 05:50:43.173524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.338 05:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:25.596 05:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:25.596 05:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:25.596 05:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:25.596 05:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:26.969 Initializing NVMe Controllers 00:25:26.969 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:26.969 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:26.969 Initialization complete. Launching workers. 00:25:26.969 ======================================================== 00:25:26.969 Latency(us) 00:25:26.969 Device Information : IOPS MiB/s Average min max 00:25:26.969 PCIE (0000:5e:00.0) NSID 1 from core 0: 98532.84 384.89 324.10 39.51 4870.10 00:25:26.969 ======================================================== 00:25:26.969 Total : 98532.84 384.89 324.10 39.51 4870.10 00:25:26.969 00:25:26.969 05:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:28.341 Initializing NVMe Controllers 00:25:28.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:28.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:28.341 Initialization complete. Launching workers. 
00:25:28.341 ======================================================== 00:25:28.341 Latency(us) 00:25:28.341 Device Information : IOPS MiB/s Average min max 00:25:28.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 111.94 0.44 9140.28 106.54 45513.86 00:25:28.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 69.96 0.27 14852.30 6987.91 47884.01 00:25:28.341 ======================================================== 00:25:28.341 Total : 181.90 0.71 11337.21 106.54 47884.01 00:25:28.341 00:25:28.341 05:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:29.713 Initializing NVMe Controllers 00:25:29.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.713 Initialization complete. Launching workers. 
00:25:29.713 ======================================================== 00:25:29.713 Latency(us) 00:25:29.713 Device Information : IOPS MiB/s Average min max 00:25:29.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11255.15 43.97 2843.12 468.97 9110.06 00:25:29.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3788.35 14.80 8472.81 5006.33 16493.22 00:25:29.713 ======================================================== 00:25:29.713 Total : 15043.50 58.76 4260.83 468.97 16493.22 00:25:29.713 00:25:29.713 05:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:29.713 05:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:29.713 05:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.240 Initializing NVMe Controllers 00:25:32.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.240 Controller IO queue size 128, less than required. 00:25:32.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.240 Controller IO queue size 128, less than required. 00:25:32.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:32.240 Initialization complete. Launching workers. 
00:25:32.240 ======================================================== 00:25:32.240 Latency(us) 00:25:32.240 Device Information : IOPS MiB/s Average min max 00:25:32.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1810.43 452.61 71987.33 41894.29 114005.20 00:25:32.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.98 149.99 221637.26 71811.44 331814.53 00:25:32.240 ======================================================== 00:25:32.240 Total : 2410.40 602.60 109236.85 41894.29 331814.53 00:25:32.240 00:25:32.240 05:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:32.240 No valid NVMe controllers or AIO or URING devices found 00:25:32.240 Initializing NVMe Controllers 00:25:32.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.240 Controller IO queue size 128, less than required. 00:25:32.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:32.240 Controller IO queue size 128, less than required. 00:25:32.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:32.240 WARNING: Some requested NVMe devices were skipped 00:25:32.240 05:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:34.765 Initializing NVMe Controllers 00:25:34.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.765 Controller IO queue size 128, less than required. 00:25:34.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.765 Controller IO queue size 128, less than required. 00:25:34.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.765 Initialization complete. Launching workers. 
00:25:34.765 00:25:34.765 ==================== 00:25:34.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:34.765 TCP transport: 00:25:34.765 polls: 15225 00:25:34.765 idle_polls: 11724 00:25:34.765 sock_completions: 3501 00:25:34.765 nvme_completions: 6359 00:25:34.765 submitted_requests: 9644 00:25:34.765 queued_requests: 1 00:25:34.765 00:25:34.765 ==================== 00:25:34.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:34.765 TCP transport: 00:25:34.765 polls: 11407 00:25:34.765 idle_polls: 7729 00:25:34.765 sock_completions: 3678 00:25:34.765 nvme_completions: 6585 00:25:34.765 submitted_requests: 9932 00:25:34.765 queued_requests: 1 00:25:34.765 ======================================================== 00:25:34.765 Latency(us) 00:25:34.765 Device Information : IOPS MiB/s Average min max 00:25:34.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1589.42 397.36 82341.16 52271.94 141946.45 00:25:34.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1645.92 411.48 78626.17 47372.55 125112.36 00:25:34.765 ======================================================== 00:25:34.765 Total : 3235.34 808.83 80451.23 47372.55 141946.45 00:25:34.765 00:25:34.765 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:34.765 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.022 rmmod nvme_tcp 00:25:35.022 rmmod nvme_fabrics 00:25:35.022 rmmod nvme_keyring 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:35.022 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 227116 ']' 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 227116 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 227116 ']' 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 227116 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227116 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227116' 00:25:35.023 killing process with pid 227116 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 227116 00:25:35.023 05:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 227116 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.920 05:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.827 00:25:38.827 real 0m25.584s 00:25:38.827 user 1m5.495s 00:25:38.827 sys 0m8.922s 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:38.827 ************************************ 00:25:38.827 END TEST nvmf_perf 00:25:38.827 ************************************ 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.827 ************************************ 00:25:38.827 START TEST nvmf_fio_host 00:25:38.827 ************************************ 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:38.827 * Looking for test storage... 00:25:38.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.827 05:50:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.827 05:50:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.827 --rc genhtml_branch_coverage=1 00:25:38.827 --rc genhtml_function_coverage=1 00:25:38.827 --rc genhtml_legend=1 00:25:38.827 --rc geninfo_all_blocks=1 00:25:38.827 --rc geninfo_unexecuted_blocks=1 00:25:38.827 00:25:38.827 ' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.827 --rc genhtml_branch_coverage=1 00:25:38.827 --rc genhtml_function_coverage=1 00:25:38.827 --rc genhtml_legend=1 00:25:38.827 --rc geninfo_all_blocks=1 00:25:38.827 --rc geninfo_unexecuted_blocks=1 00:25:38.827 00:25:38.827 ' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.827 --rc genhtml_branch_coverage=1 00:25:38.827 --rc genhtml_function_coverage=1 00:25:38.827 --rc genhtml_legend=1 00:25:38.827 --rc geninfo_all_blocks=1 00:25:38.827 --rc geninfo_unexecuted_blocks=1 00:25:38.827 00:25:38.827 ' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.827 --rc genhtml_branch_coverage=1 00:25:38.827 --rc genhtml_function_coverage=1 00:25:38.827 --rc genhtml_legend=1 00:25:38.827 --rc geninfo_all_blocks=1 00:25:38.827 --rc geninfo_unexecuted_blocks=1 00:25:38.827 00:25:38.827 ' 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.827 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.828 05:50:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.828 05:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:25:45.396 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:45.396 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.396 05:51:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:45.396 Found net devices under 0000:af:00.0: cvl_0_0 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:45.396 Found net devices under 0000:af:00.1: cvl_0_1 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.396 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.396 05:51:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:25:45.656 00:25:45.656 --- 10.0.0.2 ping statistics --- 00:25:45.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.656 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:45.656 00:25:45.656 --- 10.0.0.1 ping statistics --- 00:25:45.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.656 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=233816 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 233816 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 233816 ']' 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.656 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 [2024-12-10 05:51:03.596117] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:25:45.656 [2024-12-10 05:51:03.596159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.914 [2024-12-10 05:51:03.679182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.914 [2024-12-10 05:51:03.719834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.914 [2024-12-10 05:51:03.719871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:45.914 [2024-12-10 05:51:03.719879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.914 [2024-12-10 05:51:03.719884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.914 [2024-12-10 05:51:03.719889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.914 [2024-12-10 05:51:03.721285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.914 [2024-12-10 05:51:03.721396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.914 [2024-12-10 05:51:03.721417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.914 [2024-12-10 05:51:03.721419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.914 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.914 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:45.914 05:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:46.171 [2024-12-10 05:51:03.990367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.171 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:46.171 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:46.171 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.171 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:46.427 Malloc1 00:25:46.427 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.684 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:46.940 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.940 [2024-12-10 05:51:04.832634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.940 05:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:47.197 05:51:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:47.197 05:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:47.453 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:47.453 fio-3.35 00:25:47.453 Starting 1 thread 00:25:49.973 [2024-12-10 05:51:07.865319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b890 is same with the state(6) to be set 00:25:49.973 [2024-12-10 05:51:07.865373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b890 is same with the state(6) to be set 00:25:49.973 [2024-12-10 05:51:07.865381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b890 is same with the state(6) to be set 00:25:49.973 [2024-12-10 05:51:07.865388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b890 is same with the state(6) to be set 00:25:49.973 [2024-12-10 05:51:07.865395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b890 is same with the state(6) to be set 00:25:49.973 [2024-12-10 05:51:07.865400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b890 is same with the state(6) to be set 00:25:49.973 00:25:49.973 test: (groupid=0, jobs=1): err= 0: pid=234400: Tue Dec 10 05:51:07 2024 00:25:49.973 read: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec) 00:25:49.973 slat (nsec): min=1508, max=290665, avg=1824.89, stdev=2389.02 00:25:49.973 clat (usec): min=2569, max=10414, avg=5931.08, stdev=454.32 00:25:49.973 lat (usec): min=2600, max=10416, avg=5932.90, stdev=454.13 00:25:49.973 clat percentiles (usec): 00:25:49.973 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 
20.00th=[ 5604], 00:25:49.973 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:25:49.973 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:25:49.973 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9634], 00:25:49.973 | 99.99th=[10421] 00:25:49.973 bw ( KiB/s): min=46120, max=48048, per=99.95%, avg=47346.00, stdev=877.19, samples=4 00:25:49.973 iops : min=11530, max=12012, avg=11836.50, stdev=219.30, samples=4 00:25:49.973 write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(92.3MiB/2005msec); 0 zone resets 00:25:49.973 slat (nsec): min=1565, max=156679, avg=1898.36, stdev=1555.36 00:25:49.973 clat (usec): min=2072, max=9480, avg=4836.90, stdev=377.11 00:25:49.973 lat (usec): min=2087, max=9482, avg=4838.80, stdev=376.99 00:25:49.973 clat percentiles (usec): 00:25:49.973 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:25:49.973 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:25:49.973 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:25:49.973 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 7701], 99.95th=[ 8455], 00:25:49.973 | 99.99th=[ 9110] 00:25:49.973 bw ( KiB/s): min=46664, max=47744, per=100.00%, avg=47154.00, stdev=456.79, samples=4 00:25:49.973 iops : min=11666, max=11936, avg=11788.50, stdev=114.20, samples=4 00:25:49.973 lat (msec) : 4=0.57%, 10=99.41%, 20=0.02% 00:25:49.973 cpu : usr=72.16%, sys=24.45%, ctx=534, majf=0, minf=3 00:25:49.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:49.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:49.973 issued rwts: total=23744,23633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:49.973 00:25:49.973 Run status group 0 (all jobs): 00:25:49.973 READ: bw=46.3MiB/s (48.5MB/s), 
46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec 00:25:49.973 WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=92.3MiB (96.8MB), run=2005-2005msec 00:25:49.973 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:49.973 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:49.974 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:50.249 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:50.250 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:50.250 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:50.250 05:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:50.507 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:50.507 fio-3.35 00:25:50.507 Starting 1 thread 00:25:53.034 00:25:53.034 test: (groupid=0, jobs=1): err= 0: pid=235350: Tue Dec 10 05:51:10 2024 00:25:53.034 read: IOPS=11.0k, BW=172MiB/s (180MB/s)(344MiB/2007msec) 00:25:53.034 slat (nsec): min=2456, max=92501, avg=2782.77, stdev=1248.30 00:25:53.034 clat (usec): min=1876, max=13167, avg=6719.64, stdev=1576.79 00:25:53.034 lat (usec): min=1879, max=13169, avg=6722.42, 
stdev=1576.89 00:25:53.034 clat percentiles (usec): 00:25:53.034 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5276], 00:25:53.034 | 30.00th=[ 5735], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7177], 00:25:53.034 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9503], 00:25:53.034 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12125], 99.95th=[12911], 00:25:53.034 | 99.99th=[13042] 00:25:53.034 bw ( KiB/s): min=85856, max=96832, per=51.14%, avg=89824.00, stdev=5026.14, samples=4 00:25:53.034 iops : min= 5366, max= 6052, avg=5614.00, stdev=314.13, samples=4 00:25:53.034 write: IOPS=6632, BW=104MiB/s (109MB/s)(184MiB/1774msec); 0 zone resets 00:25:53.034 slat (usec): min=28, max=381, avg=31.22, stdev= 6.91 00:25:53.034 clat (usec): min=4144, max=15039, avg=8555.12, stdev=1472.14 00:25:53.034 lat (usec): min=4173, max=15070, avg=8586.34, stdev=1473.33 00:25:53.034 clat percentiles (usec): 00:25:53.034 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308], 00:25:53.034 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:25:53.034 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:25:53.034 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13960], 99.95th=[14222], 00:25:53.034 | 99.99th=[14615] 00:25:53.034 bw ( KiB/s): min=90144, max=100800, per=88.20%, avg=93600.00, stdev=5001.09, samples=4 00:25:53.034 iops : min= 5634, max= 6300, avg=5850.00, stdev=312.57, samples=4 00:25:53.034 lat (msec) : 2=0.01%, 4=1.90%, 10=90.44%, 20=7.65% 00:25:53.034 cpu : usr=86.55%, sys=12.76%, ctx=35, majf=0, minf=3 00:25:53.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:53.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.034 issued rwts: total=22031,11766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.034 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:25:53.034 00:25:53.034 Run status group 0 (all jobs): 00:25:53.034 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2007-2007msec 00:25:53.034 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=184MiB (193MB), run=1774-1774msec 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.034 rmmod nvme_tcp 00:25:53.034 rmmod nvme_fabrics 00:25:53.034 rmmod nvme_keyring 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 233816 ']' 00:25:53.034 05:51:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 233816 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 233816 ']' 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 233816 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233816 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233816' 00:25:53.034 killing process with pid 233816 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 233816 00:25:53.034 05:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 233816 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.293 05:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.830 00:25:55.830 real 0m16.676s 00:25:55.830 user 0m46.819s 00:25:55.830 sys 0m7.058s 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.830 ************************************ 00:25:55.830 END TEST nvmf_fio_host 00:25:55.830 ************************************ 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.830 ************************************ 00:25:55.830 START TEST nvmf_failover 00:25:55.830 ************************************ 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:55.830 * Looking for test storage... 
00:25:55.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:55.830 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.831 --rc genhtml_branch_coverage=1 00:25:55.831 --rc genhtml_function_coverage=1 00:25:55.831 --rc genhtml_legend=1 00:25:55.831 --rc geninfo_all_blocks=1 00:25:55.831 --rc geninfo_unexecuted_blocks=1 00:25:55.831 00:25:55.831 ' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:25:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.831 --rc genhtml_branch_coverage=1 00:25:55.831 --rc genhtml_function_coverage=1 00:25:55.831 --rc genhtml_legend=1 00:25:55.831 --rc geninfo_all_blocks=1 00:25:55.831 --rc geninfo_unexecuted_blocks=1 00:25:55.831 00:25:55.831 ' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.831 --rc genhtml_branch_coverage=1 00:25:55.831 --rc genhtml_function_coverage=1 00:25:55.831 --rc genhtml_legend=1 00:25:55.831 --rc geninfo_all_blocks=1 00:25:55.831 --rc geninfo_unexecuted_blocks=1 00:25:55.831 00:25:55.831 ' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.831 --rc genhtml_branch_coverage=1 00:25:55.831 --rc genhtml_function_coverage=1 00:25:55.831 --rc genhtml_legend=1 00:25:55.831 --rc geninfo_all_blocks=1 00:25:55.831 --rc geninfo_unexecuted_blocks=1 00:25:55.831 00:25:55.831 ' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.831 05:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:02.437 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.438 05:51:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:02.438 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.438 05:51:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:02.438 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.438 05:51:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:02.438 Found net devices under 0000:af:00.0: cvl_0_0 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:02.438 Found net devices under 0000:af:00.1: cvl_0_1 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.438 05:51:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.438 05:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:26:02.438 00:26:02.438 --- 10.0.0.2 ping statistics --- 00:26:02.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.438 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:02.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:26:02.438 00:26:02.438 --- 10.0.0.1 ping statistics --- 00:26:02.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.438 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=239588 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 239588 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 239588 ']' 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.438 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.439 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.439 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.439 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.439 [2024-12-10 05:51:20.249085] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:26:02.439 [2024-12-10 05:51:20.249126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.439 [2024-12-10 05:51:20.331453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:02.439 [2024-12-10 05:51:20.371310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.439 [2024-12-10 05:51:20.371345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.439 [2024-12-10 05:51:20.371353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.439 [2024-12-10 05:51:20.371359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:02.439 [2024-12-10 05:51:20.371364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:02.439 [2024-12-10 05:51:20.372780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.439 [2024-12-10 05:51:20.372888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.439 [2024-12-10 05:51:20.372889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.697 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:02.955 [2024-12-10 05:51:20.674203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.955 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:02.955 Malloc0 00:26:03.212 05:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.212 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:03.470 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.728 [2024-12-10 05:51:21.505192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.728 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:03.985 [2024-12-10 05:51:21.705739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:03.985 [2024-12-10 05:51:21.890333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=239847 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 239847 /var/tmp/bdevperf.sock 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 239847 ']' 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.985 05:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:04.242 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.242 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:04.242 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:04.807 NVMe0n1 00:26:04.807 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:05.065 00:26:05.065 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=240070 00:26:05.065 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:05.065 05:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
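The log above shows bdevperf registering two paths (ports 4420 and 4421) to the same subsystem with `-x failover` before perform_tests starts; attaching a second path under the same controller name (`-b NVMe0`) is what makes it a failover path rather than a second controller. A minimal dry-run sketch of that RPC sequence, assuming the socket path and NQN from the log; `RPC` here is an echo stub so the sketch runs without a live SPDK target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the multipath attach sequence seen in the log.
# RPC is a stub (echo) standing in for scripts/rpc.py against a real
# bdevperf RPC socket; remove the "echo" to run it for real.
RPC="echo rpc.py -s /var/tmp/bdevperf.sock"
NQN="nqn.2016-06.io.spdk:cnode1"

# First path: primary listener on port 4420, failover policy enabled
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n "$NQN" -x failover

# Second path: alternate listener on port 4421; reusing -b NVMe0
# registers it as a failover path on the same controller
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n "$NQN" -x failover
```

The test then alternates removing and re-adding listeners (4420, 4421, 4422) while I/O runs, which is what produces the qpair state errors below as connections drop and the host fails over.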
00:26:05.996 05:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:06.254 [2024-12-10 05:51:23.986825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88b8b0 is same with the state(6) to be set
[... further identical "recv state of tqpair=0x88b8b0" messages elided ...]
00:26:06.255 05:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:09.532 05:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:09.532
00:26:09.532 05:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:09.791 [2024-12-10 05:51:27.523741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88c6b0 is same with the state(6) to be set
[... further identical "recv state of tqpair=0x88c6b0" messages elided ...]
00:26:09.791 05:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:13.068 05:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-12-10 05:51:30.738550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:13.068 05:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:13.999 05:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:13.999 [2024-12-10 05:51:31.953010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d8a30 is same with the state(6) to be set
[... further identical "recv state of tqpair=0x9d8a30" messages elided ...]
00:26:14.257 05:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 240070
00:26:20.813 {
00:26:20.813 "results": [
00:26:20.813 {
00:26:20.813 "job": "NVMe0n1",
00:26:20.813 "core_mask": "0x1",
00:26:20.813 "workload": "verify",
00:26:20.813 "status": "finished",
00:26:20.813 "verify_range": {
00:26:20.813 "start": 0,
00:26:20.813 "length": 16384
00:26:20.813 },
00:26:20.813 "queue_depth": 128,
00:26:20.813 "io_size": 4096,
00:26:20.813 "runtime": 15.004912,
00:26:20.813 "iops": 11220.125782810323,
00:26:20.813 "mibps": 43.82861633910282,
00:26:20.813 "io_failed": 8165,
00:26:20.814 "io_timeout": 0,
00:26:20.814 "avg_latency_us": 10858.460446090356,
00:26:20.814 "min_latency_us": 425.2038095238095,
00:26:20.814 "max_latency_us": 20721.859047619047
00:26:20.814 }
00:26:20.814 ],
00:26:20.814 "core_count": 1
00:26:20.814 }
00:26:20.814 05:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 239847
00:26:20.814 05:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 239847 ']'
00:26:20.814 05:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 239847
00:26:20.814 05:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:20.814 05:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:20.814 05:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239847 00:26:20.814 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.814 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:20.814 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239847' 00:26:20.814 killing process with pid 239847 00:26:20.814 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 239847 00:26:20.814 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 239847 00:26:20.814 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:20.814 [2024-12-10 05:51:21.956193] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:26:20.814 [2024-12-10 05:51:21.956262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239847 ] 00:26:20.814 [2024-12-10 05:51:22.037404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.814 [2024-12-10 05:51:22.077366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.814 Running I/O for 15 seconds... 
00:26:20.814 11303.00 IOPS, 44.15 MiB/s [2024-12-10T04:51:38.773Z] [2024-12-10 05:51:23.987531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:20.814 [2024-12-10 05:51:23.987647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:20.814 [2024-12-10 05:51:23.987896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.987985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.987993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.988000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.988007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.988013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.814 [2024-12-10 05:51:23.988021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.814 [2024-12-10 05:51:23.988027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:20.815 [2024-12-10 05:51:23.988148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.815 [2024-12-10 05:51:23.988344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:20.815 [2024-12-10 05:51:23.988402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.815 [2024-12-10 05:51:23.988559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.815 [2024-12-10 05:51:23.988573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.815 [2024-12-10 05:51:23.988587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.815 [2024-12-10 05:51:23.988601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.815 [2024-12-10 05:51:23.988615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.815 [2024-12-10 05:51:23.988623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:20.816 [2024-12-10 05:51:23.988644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [2024-12-10 05:51:23.988807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.816 [2024-12-10 05:51:23.988814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.816 [... 41 further identical WRITE / ABORTED - SQ DELETION pairs elided (lba:100864 through lba:101184, len:8 each) ...] 00:26:20.817 [2024-12-10 05:51:23.989439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.817 [2024-12-10 05:51:23.989445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.817 [2024-12-10 05:51:23.989451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:26:20.817 [2024-12-10 05:51:23.989458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.817 [2024-12-10 05:51:23.989501] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:20.817 [2024-12-10 05:51:23.989521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.817 [2024-12-10 05:51:23.989528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.817 [2024-12-10 
05:51:23.989535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.817 [2024-12-10 05:51:23.989542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.817 [... 2 further identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs elided (cid:2, cid:3) ...] 00:26:20.817 [2024-12-10 05:51:23.989575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:20.817 [2024-12-10 05:51:23.989601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d6930 (9): Bad file descriptor 00:26:20.817 [2024-12-10 05:51:23.992373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:20.817 [2024-12-10 05:51:24.014494] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
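[editor's note] The qpair messages in this log all share one fixed shape: `[timestamp] file.c: line:function: *LEVEL*: message`. A minimal sketch of how such a run could be summarized, assuming only that shape as visible above; the helper name `tally_aborts` and the parsing approach are ours, not part of SPDK:

```python
import re

# Each SPDK qpair log entry looks like:
#   [timestamp] file.c: line:function: *LEVEL*: message
LINE_RE = re.compile(
    r"\[(?P<ts>[0-9-]+ [0-9:.]+)\] "
    r"(?P<file>\S+): *(?P<line>\d+):(?P<func>\w+): "
    r"\*(?P<level>\w+)\*: (?P<msg>.*)"
)

def tally_aborts(log_text):
    """Count printed I/O commands per opcode word (READ/WRITE/...)
    and completions reported as ABORTED, from qpair log text."""
    commands, aborted = {}, 0
    for m in LINE_RE.finditer(log_text):
        msg = m.group("msg")
        if m.group("func") == "nvme_io_qpair_print_command":
            opcode = msg.split()[0]  # first word of the message
            commands[opcode] = commands.get(opcode, 0) + 1
        elif "ABORTED" in msg:
            aborted += 1
    return commands, aborted

# Two entries copied from the log above (wrapped for readability):
sample = (
    "[2024-12-10 05:51:23.988807] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100856 len:8\n"
    "[2024-12-10 05:51:23.988814] nvme_qpair.c: 474:spdk_nvme_print_completion: "
    "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0\n"
)
print(tally_aborts(sample))  # -> ({'WRITE': 1}, 1)
```

Running this over the full abort run shows at a glance how many WRITE and READ commands were cancelled by each SQ deletion during failover.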
00:26:20.817 11205.50 IOPS, 43.77 MiB/s [2024-12-10T04:51:38.776Z] 11269.00 IOPS, 44.02 MiB/s [2024-12-10T04:51:38.776Z] 11296.00 IOPS, 44.12 MiB/s [2024-12-10T04:51:38.776Z] [2024-12-10 05:51:27.524724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.817 [2024-12-10 05:51:27.524758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.817 [... further identical READ (lba:34904 through lba:35280) and WRITE (lba:35360 through lba:35408) command / ABORTED - SQ DELETION pairs elided ...] 00:26:20.819 [2024-12-10 05:51:27.525576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.819 [2024-12-10 05:51:27.525697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525739] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 
[2024-12-10 05:51:27.525902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.525992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.525999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.526006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.526013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.526019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.526027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.526033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.526040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.526047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.526056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.526062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.819 [2024-12-10 05:51:27.526070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.819 [2024-12-10 05:51:27.526076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 
05:51:27.526141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526219] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.820 [2024-12-10 05:51:27.526375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526407] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35800 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35808 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35816 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35824 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 
[2024-12-10 05:51:27.526492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35832 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35840 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35848 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35856 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35864 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35872 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35880 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.820 [2024-12-10 05:51:27.526659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.820 [2024-12-10 05:51:27.526664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.820 [2024-12-10 05:51:27.526669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35888 len:8 PRP1 0x0 PRP2 0x0 00:26:20.820 [2024-12-10 05:51:27.526675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.821 [2024-12-10 05:51:27.526681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.821 [2024-12-10 05:51:27.526686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.821 [2024-12-10 05:51:27.526691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35896 len:8 PRP1 0x0 PRP2 0x0 00:26:20.821 [2024-12-10 05:51:27.526698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.821 [2024-12-10 05:51:27.526704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.821 [2024-12-10 05:51:27.526709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.821 [2024-12-10 05:51:27.538137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35904 len:8 PRP1 0x0 PRP2 0x0 00:26:20.821 [2024-12-10 05:51:27.538149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:27.538158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:20.821 [2024-12-10 05:51:27.538163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:20.821 [2024-12-10 05:51:27.538170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35912 len:8 PRP1 0x0 PRP2 0x0
00:26:20.821 [2024-12-10 05:51:27.538177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:27.538224] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:20.821 [2024-12-10 05:51:27.538248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:20.821 [2024-12-10 05:51:27.538256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:27.538264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:20.821 [2024-12-10 05:51:27.538271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:27.538280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:20.821 [2024-12-10 05:51:27.538292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:27.538299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:20.821 [2024-12-10 05:51:27.538306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:27.538314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:20.821 [2024-12-10 05:51:27.538347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d6930 (9): Bad file descriptor
00:26:20.821 [2024-12-10 05:51:27.541476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:20.821 [2024-12-10 05:51:27.611554] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:20.821 11082.80 IOPS, 43.29 MiB/s [2024-12-10T04:51:38.780Z] 11145.00 IOPS, 43.54 MiB/s [2024-12-10T04:51:38.780Z] 11166.29 IOPS, 43.62 MiB/s [2024-12-10T04:51:38.780Z] 11205.00 IOPS, 43.77 MiB/s [2024-12-10T04:51:38.780Z] 11225.89 IOPS, 43.85 MiB/s [2024-12-10T04:51:38.780Z] [2024-12-10 05:51:31.953360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.821 [2024-12-10 05:51:31.953803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.821 [2024-12-10 05:51:31.953811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.953991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.953999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.822 [2024-12-10 05:51:31.954210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.822 [2024-12-10 05:51:31.954323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.822 [2024-12-10 05:51:31.954330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.823 [2024-12-10 05:51:31.954787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.823 [2024-12-10 05:51:31.954914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.823 [2024-12-10 05:51:31.954920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.824 [2024-12-10 05:51:31.954928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.824 [2024-12-10 05:51:31.954934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.824 [2024-12-10 05:51:31.954942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.824 [2024-12-10 05:51:31.954948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.824 [2024-12-10 05:51:31.954956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.824 [2024-12-10 05:51:31.954963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.824 [2024-12-10 05:51:31.954971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.824 [2024-12-10 05:51:31.954977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.824 [2024-12-10 05:51:31.954984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.824 [2024-12-10 05:51:31.954992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.824 [2024-12-10 05:51:31.955001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.824 [2024-12-10 05:51:31.955008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.824 [2024-12-10 05:51:31.955022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63640 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63656 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63664 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63672 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63680 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955183] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63688 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63696 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63704 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63712 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 
[2024-12-10 05:51:31.955271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63720 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63728 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63736 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63744 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63752 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.824 [2024-12-10 05:51:31.955397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:20.824 [2024-12-10 05:51:31.955403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63760 len:8 PRP1 0x0 PRP2 0x0 00:26:20.824 [2024-12-10 05:51:31.955409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955450] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:20.824 [2024-12-10 05:51:31.955471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:20.824 [2024-12-10 05:51:31.955478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.824 [2024-12-10 05:51:31.955494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.824 [2024-12-10 05:51:31.955507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.824 [2024-12-10 05:51:31.955520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.824 [2024-12-10 05:51:31.955526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:20.824 [2024-12-10 05:51:31.955548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d6930 (9): Bad file descriptor 00:26:20.824 [2024-12-10 05:51:31.958305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:20.824 [2024-12-10 05:51:32.028787] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:26:20.824 11157.20 IOPS, 43.58 MiB/s [2024-12-10T04:51:38.783Z] 11161.00 IOPS, 43.60 MiB/s [2024-12-10T04:51:38.783Z] 11181.50 IOPS, 43.68 MiB/s [2024-12-10T04:51:38.783Z] 11199.46 IOPS, 43.75 MiB/s [2024-12-10T04:51:38.784Z] 11218.64 IOPS, 43.82 MiB/s 00:26:20.825 Latency(us) 00:26:20.825 [2024-12-10T04:51:38.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.825 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:20.825 Verification LBA range: start 0x0 length 0x4000 00:26:20.825 NVMe0n1 : 15.00 11220.13 43.83 544.16 0.00 10858.46 425.20 20721.86 00:26:20.825 [2024-12-10T04:51:38.784Z] =================================================================================================================== 00:26:20.825 [2024-12-10T04:51:38.784Z] Total : 11220.13 43.83 544.16 0.00 10858.46 425.20 20721.86 00:26:20.825 Received shutdown signal, test time was about 15.000000 seconds 00:26:20.825 00:26:20.825 Latency(us) 00:26:20.825 [2024-12-10T04:51:38.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.825 [2024-12-10T04:51:38.784Z] =================================================================================================================== 00:26:20.825 [2024-12-10T04:51:38.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=242515 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:20.825 
05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 242515 /var/tmp/bdevperf.sock 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 242515 ']' 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:20.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:20.825 [2024-12-10 05:51:38.601374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:20.825 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:21.082 [2024-12-10 05:51:38.785910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:21.082 05:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:21.339 NVMe0n1 00:26:21.339 05:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:21.597 00:26:21.597 05:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:22.160 00:26:22.160 05:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:22.160 05:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:22.160 05:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:22.417 05:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:25.693 05:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:25.693 05:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:25.693 05:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:26:25.693 05:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=243255 00:26:25.693 05:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 243255 00:26:26.625 { 00:26:26.625 "results": [ 00:26:26.625 { 00:26:26.625 "job": "NVMe0n1", 00:26:26.625 "core_mask": "0x1", 00:26:26.625 "workload": "verify", 00:26:26.625 "status": "finished", 00:26:26.625 "verify_range": { 00:26:26.625 "start": 0, 00:26:26.625 "length": 16384 00:26:26.625 }, 00:26:26.625 "queue_depth": 128, 00:26:26.625 "io_size": 4096, 00:26:26.625 "runtime": 1.004866, 00:26:26.625 "iops": 11308.970549307072, 00:26:26.625 "mibps": 44.17566620823075, 00:26:26.625 "io_failed": 0, 00:26:26.625 "io_timeout": 0, 00:26:26.625 "avg_latency_us": 11276.466639178023, 00:26:26.625 "min_latency_us": 955.7333333333333, 00:26:26.625 "max_latency_us": 11234.742857142857 00:26:26.625 } 00:26:26.625 ], 00:26:26.625 "core_count": 1 00:26:26.625 } 00:26:26.882 05:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:26.882 [2024-12-10 05:51:38.215759] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:26:26.883 [2024-12-10 05:51:38.215813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242515 ] 00:26:26.883 [2024-12-10 05:51:38.299099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.883 [2024-12-10 05:51:38.335679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.883 [2024-12-10 05:51:40.231144] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:26.883 [2024-12-10 05:51:40.231193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.883 [2024-12-10 05:51:40.231203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.883 [2024-12-10 05:51:40.231212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.883 [2024-12-10 05:51:40.231223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.883 [2024-12-10 05:51:40.231230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.883 [2024-12-10 05:51:40.231237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.883 [2024-12-10 05:51:40.231244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.883 [2024-12-10 05:51:40.231251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.883 [2024-12-10 05:51:40.231258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:26.883 [2024-12-10 05:51:40.231285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:26.883 [2024-12-10 05:51:40.231299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc94930 (9): Bad file descriptor 00:26:26.883 [2024-12-10 05:51:40.284451] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:26.883 Running I/O for 1 seconds... 00:26:26.883 11236.00 IOPS, 43.89 MiB/s 00:26:26.883 Latency(us) 00:26:26.883 [2024-12-10T04:51:44.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.883 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:26.883 Verification LBA range: start 0x0 length 0x4000 00:26:26.883 NVMe0n1 : 1.00 11308.97 44.18 0.00 0.00 11276.47 955.73 11234.74 00:26:26.883 [2024-12-10T04:51:44.842Z] =================================================================================================================== 00:26:26.883 [2024-12-10T04:51:44.842Z] Total : 11308.97 44.18 0.00 0.00 11276.47 955.73 11234.74 00:26:26.883 05:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:26.883 05:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:26.883 05:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:27.141 05:51:44 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:27.141 05:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:27.397 05:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:27.655 05:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 242515 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 242515 ']' 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 242515 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 242515 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 242515' 00:26:30.936 killing process 
with pid 242515 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 242515 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 242515 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:30.936 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.194 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:31.194 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.194 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:31.194 05:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.194 rmmod nvme_tcp 00:26:31.194 rmmod nvme_fabrics 00:26:31.194 rmmod nvme_keyring 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 239588 ']' 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 239588 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 239588 ']' 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 239588 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239588 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239588' 00:26:31.194 killing process with pid 239588 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 239588 00:26:31.194 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 239588 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.454 05:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.989 00:26:33.989 real 0m38.104s 00:26:33.989 user 1m58.471s 00:26:33.989 sys 0m8.407s 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:33.989 ************************************ 00:26:33.989 END TEST nvmf_failover 00:26:33.989 ************************************ 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.989 ************************************ 00:26:33.989 START TEST nvmf_host_discovery 00:26:33.989 ************************************ 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:33.989 * Looking for test storage... 
00:26:33.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:33.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.989 --rc genhtml_branch_coverage=1 00:26:33.989 --rc genhtml_function_coverage=1 00:26:33.989 --rc 
genhtml_legend=1 00:26:33.989 --rc geninfo_all_blocks=1 00:26:33.989 --rc geninfo_unexecuted_blocks=1 00:26:33.989 00:26:33.989 ' 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:33.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.989 --rc genhtml_branch_coverage=1 00:26:33.989 --rc genhtml_function_coverage=1 00:26:33.989 --rc genhtml_legend=1 00:26:33.989 --rc geninfo_all_blocks=1 00:26:33.989 --rc geninfo_unexecuted_blocks=1 00:26:33.989 00:26:33.989 ' 00:26:33.989 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:33.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.989 --rc genhtml_branch_coverage=1 00:26:33.989 --rc genhtml_function_coverage=1 00:26:33.989 --rc genhtml_legend=1 00:26:33.989 --rc geninfo_all_blocks=1 00:26:33.989 --rc geninfo_unexecuted_blocks=1 00:26:33.989 00:26:33.990 ' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:33.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.990 --rc genhtml_branch_coverage=1 00:26:33.990 --rc genhtml_function_coverage=1 00:26:33.990 --rc genhtml_legend=1 00:26:33.990 --rc geninfo_all_blocks=1 00:26:33.990 --rc geninfo_unexecuted_blocks=1 00:26:33.990 00:26:33.990 ' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.990 05:51:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.990 05:51:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.990 05:51:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.990 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.556 
05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.556 05:51:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.556 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.556 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:40.556 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:26:40.557 00:26:40.557 --- 10.0.0.2 ping statistics --- 00:26:40.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.557 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:26:40.557 00:26:40.557 --- 10.0.0.1 ping statistics --- 00:26:40.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.557 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.557 
05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.557 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=248166 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 248166 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 248166 ']' 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.816 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 [2024-12-10 05:51:58.562351] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:26:40.816 [2024-12-10 05:51:58.562395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.816 [2024-12-10 05:51:58.643947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.816 [2024-12-10 05:51:58.681209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.816 [2024-12-10 05:51:58.681245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.816 [2024-12-10 05:51:58.681253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.816 [2024-12-10 05:51:58.681258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.816 [2024-12-10 05:51:58.681263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:40.816 [2024-12-10 05:51:58.681814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.074 [2024-12-10 05:51:58.825275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.074 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.075 [2024-12-10 05:51:58.837436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:41.075 05:51:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.075 null0 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.075 null1 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=248192 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 248192 /tmp/host.sock 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 248192 ']' 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:41.075 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.075 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.075 [2024-12-10 05:51:58.913500] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:26:41.075 [2024-12-10 05:51:58.913539] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248192 ] 00:26:41.075 [2024-12-10 05:51:58.993105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.333 [2024-12-10 05:51:59.032675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:41.898 05:51:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:41.898 05:51:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.898 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:42.156 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.156 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.156 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:42.157 
05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.157 05:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 [2024-12-10 05:52:00.040590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.157 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.415 
05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:42.415 05:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:42.981 [2024-12-10 05:52:00.754188] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.981 [2024-12-10 05:52:00.754209] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.981 [2024-12-10 05:52:00.754227] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.981 [2024-12-10 05:52:00.880594] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:43.239 [2024-12-10 05:52:01.057492] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:26:43.239 [2024-12-10 05:52:01.058236] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15582e0:1 started. 00:26:43.239 [2024-12-10 05:52:01.059558] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.239 [2024-12-10 05:52:01.059573] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:43.239 [2024-12-10 05:52:01.063653] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15582e0 was disconnected and freed. delete nvme_qpair. 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.497 
05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:43.497 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:43.498 
05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.498 [2024-12-10 05:52:01.429642] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1558660:1 started. 
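The `get_notification_count` traces above fetch notifications newer than the last seen id (`notify_get_notifications -i $notify_id`), count them with `jq '. | length'`, and advance `notify_id` — which is why the log shows `notification_count=1, notify_id=1` after the first namespace add and `notification_count=1, notify_id=2` after the second. A rough Python model of that bookkeeping, with illustrative names (this is an assumption about the helper's logic inferred from the trace, not SPDK code):

```python
class NotifyTracker:
    """Model of the test's notification bookkeeping: count only new events."""

    def __init__(self):
        self.notify_id = 0  # highest notification id seen so far

    def get_notification_count(self, notifications):
        # keep events with id greater than the last seen id, then advance
        new = [n for n in notifications if n["id"] > self.notify_id]
        if new:
            self.notify_id = max(n["id"] for n in new)
        return len(new)

events = [{"id": 1, "type": "bdev_register"},
          {"id": 2, "type": "bdev_register"}]
t = NotifyTracker()
print(t.get_notification_count(events[:1]))  # 1  (notify_id -> 1)
print(t.get_notification_count(events))      # 1  (only id 2 is new; notify_id -> 2)
```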
00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.498 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.755 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.755 [2024-12-10 05:52:01.475708] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1558660 was disconnected and freed. delete nvme_qpair. 
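The `waitforcondition` traces that recur above (`local max=10`, `(( max-- ))`, `eval` of the condition string, `sleep 1` between attempts) implement a bounded poll: re-evaluate the condition up to ten times, one second apart, and succeed as soon as it holds. A minimal Python equivalent of that pattern — illustrative only; the real shell helper evaluates a string with `eval` and aborts the test on timeout:

```python
import time

def wait_for_condition(cond, max_attempts=10, delay=1.0):
    """Re-check cond() up to max_attempts times, sleeping between tries."""
    for _ in range(max_attempts):
        if cond():
            return True
        time.sleep(delay)
    return False

# e.g. poll until a discovered controller name shows up
seen = iter([[], [], ["nvme0"]])
print(wait_for_condition(lambda: next(seen) == ["nvme0"], delay=0))  # True
```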
00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.756 [2024-12-10 05:52:01.532538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:43.756 [2024-12-10 05:52:01.533377] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:43.756 [2024-12-10 05:52:01.533395] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.756 05:52:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:43.756 05:52:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.756 [2024-12-10 05:52:01.661775] bdev_nvme.c:7434:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:43.756 05:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:44.013 [2024-12-10 05:52:01.964963] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:44.013 [2024-12-10 05:52:01.964999] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:44.013 [2024-12-10 05:52:01.965011] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:26:44.013 [2024-12-10 05:52:01.965016] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 [2024-12-10 05:52:02.784404] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:44.948 [2024-12-10 05:52:02.784425] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:44.948 [2024-12-10 05:52:02.791847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.948 [2024-12-10 05:52:02.791864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.948 [2024-12-10 05:52:02.791873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.948 [2024-12-10 05:52:02.791880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.948 [2024-12-10 05:52:02.791888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.948 [2024-12-10 05:52:02.791895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.948 [2024-12-10 05:52:02.791902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.948 [2024-12-10 05:52:02.791908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.948 [2024-12-10 05:52:02.791915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.948 05:52:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:44.948 [2024-12-10 05:52:02.801862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.948 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 [2024-12-10 05:52:02.811897] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.948 [2024-12-10 05:52:02.811908] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.948 [2024-12-10 05:52:02.811915] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.948 [2024-12-10 05:52:02.811920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.948 [2024-12-10 05:52:02.811935] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:44.948 [2024-12-10 05:52:02.812179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.948 [2024-12-10 05:52:02.812196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528790 with addr=10.0.0.2, port=4420 00:26:44.948 [2024-12-10 05:52:02.812204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.948 [2024-12-10 05:52:02.812215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.948 [2024-12-10 05:52:02.812230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.948 [2024-12-10 05:52:02.812236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.948 [2024-12-10 05:52:02.812244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.948 [2024-12-10 05:52:02.812250] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.948 [2024-12-10 05:52:02.812255] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.948 [2024-12-10 05:52:02.812260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.948 [2024-12-10 05:52:02.821966] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.948 [2024-12-10 05:52:02.821976] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:44.948 [2024-12-10 05:52:02.821981] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.948 [2024-12-10 05:52:02.821985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.948 [2024-12-10 05:52:02.821997] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.948 [2024-12-10 05:52:02.822234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.948 [2024-12-10 05:52:02.822247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528790 with addr=10.0.0.2, port=4420 00:26:44.948 [2024-12-10 05:52:02.822254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.948 [2024-12-10 05:52:02.822264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.948 [2024-12-10 05:52:02.822274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.948 [2024-12-10 05:52:02.822279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.949 [2024-12-10 05:52:02.822286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.949 [2024-12-10 05:52:02.822291] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.949 [2024-12-10 05:52:02.822296] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.949 [2024-12-10 05:52:02.822300] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:44.949 [2024-12-10 05:52:02.832028] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.949 [2024-12-10 05:52:02.832041] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.949 [2024-12-10 05:52:02.832045] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.949 [2024-12-10 05:52:02.832049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.949 [2024-12-10 05:52:02.832062] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.949 [2024-12-10 05:52:02.832308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.949 [2024-12-10 05:52:02.832322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528790 with addr=10.0.0.2, port=4420 00:26:44.949 [2024-12-10 05:52:02.832329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.949 [2024-12-10 05:52:02.832339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.949 [2024-12-10 05:52:02.832349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.949 [2024-12-10 05:52:02.832355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.949 [2024-12-10 05:52:02.832362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.949 [2024-12-10 05:52:02.832367] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:44.949 [2024-12-10 05:52:02.832372] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.949 [2024-12-10 05:52:02.832376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:44.949 [2024-12-10 05:52:02.842092] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.949 [2024-12-10 05:52:02.842104] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.949 [2024-12-10 05:52:02.842108] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:44.949 [2024-12-10 05:52:02.842112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.949 [2024-12-10 05:52:02.842124] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.949 [2024-12-10 05:52:02.842337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.949 [2024-12-10 05:52:02.842349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528790 with addr=10.0.0.2, port=4420 00:26:44.949 [2024-12-10 05:52:02.842356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.949 [2024-12-10 05:52:02.842366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.949 [2024-12-10 05:52:02.842376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.949 [2024-12-10 05:52:02.842382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.949 [2024-12-10 05:52:02.842388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.949 [2024-12-10 05:52:02.842394] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.949 [2024-12-10 05:52:02.842401] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.949 [2024-12-10 05:52:02.842405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.949 [2024-12-10 05:52:02.852154] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.949 [2024-12-10 05:52:02.852167] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.949 [2024-12-10 05:52:02.852171] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.949 [2024-12-10 05:52:02.852175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.949 [2024-12-10 05:52:02.852189] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:44.949 [2024-12-10 05:52:02.852422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.949 [2024-12-10 05:52:02.852435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528790 with addr=10.0.0.2, port=4420 00:26:44.949 [2024-12-10 05:52:02.852443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.949 [2024-12-10 05:52:02.852453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.949 [2024-12-10 05:52:02.852463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.949 [2024-12-10 05:52:02.852469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.949 [2024-12-10 05:52:02.852476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.949 [2024-12-10 05:52:02.852482] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.949 [2024-12-10 05:52:02.852486] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.949 [2024-12-10 05:52:02.852490] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.949 [2024-12-10 05:52:02.862222] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.949 [2024-12-10 05:52:02.862233] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:44.949 [2024-12-10 05:52:02.862237] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.949 [2024-12-10 05:52:02.862241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.949 [2024-12-10 05:52:02.862253] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.949 [2024-12-10 05:52:02.862412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.949 [2024-12-10 05:52:02.862424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528790 with addr=10.0.0.2, port=4420 00:26:44.949 [2024-12-10 05:52:02.862437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528790 is same with the state(6) to be set 00:26:44.949 [2024-12-10 05:52:02.862446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528790 (9): Bad file descriptor 00:26:44.949 [2024-12-10 05:52:02.862456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.949 [2024-12-10 05:52:02.862462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.949 [2024-12-10 05:52:02.862468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.949 [2024-12-10 05:52:02.862473] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.949 [2024-12-10 05:52:02.862477] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.949 [2024-12-10 05:52:02.862481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:44.949 [2024-12-10 05:52:02.870664] bdev_nvme.c:7297:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:44.949 [2024-12-10 05:52:02.870679] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:44.949 05:52:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.949 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.207 05:52:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.207 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.207 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:45.207 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:45.207 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.208 05:52:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
get_bdev_list 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:45.208 05:52:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.208 05:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 [2024-12-10 05:52:04.193673] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:46.581 [2024-12-10 05:52:04.193689] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:46.581 [2024-12-10 05:52:04.193699] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page 
command 00:26:46.581 [2024-12-10 05:52:04.279952] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:46.839 [2024-12-10 05:52:04.579188] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:46.839 [2024-12-10 05:52:04.579735] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1563730:1 started. 00:26:46.839 [2024-12-10 05:52:04.581208] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:46.839 [2024-12-10 05:52:04.581239] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:46.839 [2024-12-10 05:52:04.582573] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1563730 was disconnected and freed. delete nvme_qpair. 
00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.839 request: 00:26:46.839 { 00:26:46.839 "name": "nvme", 00:26:46.839 "trtype": "tcp", 00:26:46.839 "traddr": "10.0.0.2", 00:26:46.839 "adrfam": "ipv4", 00:26:46.839 "trsvcid": "8009", 00:26:46.839 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:46.839 "wait_for_attach": true, 00:26:46.839 "method": "bdev_nvme_start_discovery", 00:26:46.839 "req_id": 1 00:26:46.839 } 
00:26:46.839 Got JSON-RPC error response 00:26:46.839 response: 00:26:46.839 { 00:26:46.839 "code": -17, 00:26:46.839 "message": "File exists" 00:26:46.839 } 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.839 05:52:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp 
-a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.839 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.839 request: 00:26:46.839 { 00:26:46.839 "name": "nvme_second", 00:26:46.839 "trtype": "tcp", 00:26:46.839 "traddr": "10.0.0.2", 00:26:46.839 "adrfam": "ipv4", 00:26:46.839 "trsvcid": "8009", 00:26:46.839 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:46.840 "wait_for_attach": true, 00:26:46.840 "method": "bdev_nvme_start_discovery", 00:26:46.840 "req_id": 1 00:26:46.840 } 00:26:46.840 Got JSON-RPC error response 00:26:46.840 response: 00:26:46.840 { 00:26:46.840 "code": -17, 00:26:46.840 "message": "File exists" 00:26:46.840 } 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:46.840 05:52:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:46.840 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.098 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.033 [2024-12-10 05:52:05.825284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.033 [2024-12-10 05:52:05.825310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15406a0 with addr=10.0.0.2, port=8010 00:26:48.033 [2024-12-10 05:52:05.825322] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:48.033 [2024-12-10 05:52:05.825328] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:48.033 [2024-12-10 05:52:05.825334] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:48.965 [2024-12-10 05:52:06.827735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.965 [2024-12-10 05:52:06.827760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15406a0 with addr=10.0.0.2, port=8010 00:26:48.965 [2024-12-10 05:52:06.827771] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: 
*ERROR*: failed to create admin qpair 00:26:48.965 [2024-12-10 05:52:06.827777] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:48.965 [2024-12-10 05:52:06.827783] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:49.897 [2024-12-10 05:52:07.829913] bdev_nvme.c:7553:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:49.897 request: 00:26:49.897 { 00:26:49.897 "name": "nvme_second", 00:26:49.897 "trtype": "tcp", 00:26:49.897 "traddr": "10.0.0.2", 00:26:49.897 "adrfam": "ipv4", 00:26:49.897 "trsvcid": "8010", 00:26:49.897 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:49.897 "wait_for_attach": false, 00:26:49.897 "attach_timeout_ms": 3000, 00:26:49.897 "method": "bdev_nvme_start_discovery", 00:26:49.897 "req_id": 1 00:26:49.897 } 00:26:49.897 Got JSON-RPC error response 00:26:49.897 response: 00:26:49.897 { 00:26:49.897 "code": -110, 00:26:49.897 "message": "Connection timed out" 00:26:49.897 } 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:49.897 05:52:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:49.897 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 248192 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:50.156 rmmod nvme_tcp 00:26:50.156 rmmod nvme_fabrics 00:26:50.156 rmmod nvme_keyring 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:50.156 05:52:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 248166 ']' 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 248166 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 248166 ']' 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 248166 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:50.156 05:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248166 00:26:50.156 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:50.156 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:50.156 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248166' 00:26:50.156 killing process with pid 248166 00:26:50.156 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 248166 00:26:50.156 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 248166 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:50.414 05:52:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.414 05:52:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.317 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:52.317 00:26:52.317 real 0m18.816s 00:26:52.317 user 0m22.086s 00:26:52.317 sys 0m6.547s 00:26:52.317 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:52.317 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.317 ************************************ 00:26:52.317 END TEST nvmf_host_discovery 00:26:52.317 ************************************ 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.576 ************************************ 00:26:52.576 START TEST nvmf_host_multipath_status 00:26:52.576 
************************************ 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:52.576 * Looking for test storage... 00:26:52.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 
00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@368 -- # return 0 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:52.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.576 --rc genhtml_branch_coverage=1 00:26:52.576 --rc genhtml_function_coverage=1 00:26:52.576 --rc genhtml_legend=1 00:26:52.576 --rc geninfo_all_blocks=1 00:26:52.576 --rc geninfo_unexecuted_blocks=1 00:26:52.576 00:26:52.576 ' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:52.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.576 --rc genhtml_branch_coverage=1 00:26:52.576 --rc genhtml_function_coverage=1 00:26:52.576 --rc genhtml_legend=1 00:26:52.576 --rc geninfo_all_blocks=1 00:26:52.576 --rc geninfo_unexecuted_blocks=1 00:26:52.576 00:26:52.576 ' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:52.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.576 --rc genhtml_branch_coverage=1 00:26:52.576 --rc genhtml_function_coverage=1 00:26:52.576 --rc genhtml_legend=1 00:26:52.576 --rc geninfo_all_blocks=1 00:26:52.576 --rc geninfo_unexecuted_blocks=1 00:26:52.576 00:26:52.576 ' 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:52.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.576 --rc genhtml_branch_coverage=1 00:26:52.576 --rc genhtml_function_coverage=1 00:26:52.576 --rc genhtml_legend=1 00:26:52.576 --rc geninfo_all_blocks=1 00:26:52.576 --rc geninfo_unexecuted_blocks=1 00:26:52.576 00:26:52.576 ' 00:26:52.576 05:52:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.576 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.577 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.835 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:52.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:52.836 05:52:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:52.836 05:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:59.496 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:59.496 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:59.496 Found net devices under 0000:af:00.0: cvl_0_0 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.496 05:52:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:59.496 Found net devices under 0000:af:00.1: cvl_0_1 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:59.496 05:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.496 05:52:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.496 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:59.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:26:59.497 00:26:59.497 --- 10.0.0.2 ping statistics --- 00:26:59.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.497 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:26:59.497 00:26:59.497 --- 10.0.0.1 ping statistics --- 00:26:59.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.497 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=253734 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 253734 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 253734 ']' 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.497 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:59.497 [2024-12-10 05:52:17.358270] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:26:59.497 [2024-12-10 05:52:17.358311] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.497 [2024-12-10 05:52:17.426823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:59.756 [2024-12-10 05:52:17.468121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.756 [2024-12-10 05:52:17.468157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:59.756 [2024-12-10 05:52:17.468165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.756 [2024-12-10 05:52:17.468170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.756 [2024-12-10 05:52:17.468175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.756 [2024-12-10 05:52:17.473235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.756 [2024-12-10 05:52:17.473239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=253734 00:26:59.756 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:00.015 [2024-12-10 05:52:17.781917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.015 05:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:27:00.274 Malloc0 00:27:00.274 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:00.533 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:00.533 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.791 [2024-12-10 05:52:18.601296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.791 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:01.049 [2024-12-10 05:52:18.793768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:01.049 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:01.049 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=253984 00:27:01.049 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:01.049 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 253984 /var/tmp/bdevperf.sock 00:27:01.049 05:52:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 253984 ']' 00:27:01.049 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:01.049 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.050 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:01.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:01.050 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.050 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:01.308 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.308 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:01.308 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:01.567 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:01.825 Nvme0n1 00:27:01.825 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:02.393 Nvme0n1 00:27:02.393 05:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:02.393 05:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:04.296 05:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:04.296 05:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:04.555 05:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:04.814 05:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:05.750 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:05.750 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:05.750 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.750 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.009 05:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:06.267 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.267 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:06.267 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.267 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:06.526 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.526 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:06.526 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.526 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:06.784 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.784 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:06.784 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.784 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:07.043 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.043 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:07.043 05:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:07.302 05:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:07.302 05:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.678 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.937 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.937 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.937 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.937 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.196 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.196 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:09.196 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.196 05:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:09.196 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.196 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:09.196 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.196 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.455 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.455 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:09.455 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.455 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.713 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.713 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:09.713 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:09.972 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:10.231 05:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:11.167 05:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:11.167 05:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.167 05:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.167 05:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.426 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:11.684 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.684 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:11.684 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.684 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:11.943 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.943 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:11.943 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.943 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.202 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.202 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:12.202 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.202 05:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.460 05:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.460 05:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:12.460 05:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:12.460 05:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:12.719 05:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.095 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:14.096 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.096 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:14.096 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.096 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:14.354 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.354 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:14.354 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.354 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:14.613 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.613 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:14.613 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:14.613 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.872 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.872 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:14.872 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.872 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:15.131 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.131 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:15.131 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:15.131 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:15.389 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:16.323 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:16.323 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:16.323 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.323 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:16.581 05:52:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.581 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:16.581 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:16.581 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.839 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.839 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:16.839 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.839 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:17.097 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.097 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:17.097 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.097 05:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:17.355 
05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.355 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:17.612 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.612 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:17.612 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:17.870 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:18.128 05:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:19.062 05:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:19.062 05:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:19.062 05:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.062 05:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:19.320 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:19.320 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:19.320 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.320 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:19.578 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.578 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:19.578 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:19.578 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.836 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:20.094 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.094 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:20.094 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.094 05:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:20.352 05:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.352 05:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:20.611 05:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:20.611 05:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:20.869 05:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:20.869 05:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:22.242 05:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:22.242 05:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:22.242 05:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:22.242 05:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:22.242 05:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.242 05:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:22.242 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.242 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:22.500 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.500 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:22.501 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.501 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:22.501 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.501 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:22.501 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:22.501 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:22.758 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.758 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:22.758 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.758 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.016 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.016 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.016 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.016 05:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.275 05:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.275 05:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:23.275 05:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:23.533 05:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:23.533 05:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:24.906 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.164 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.164 05:52:42 
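The `check_status` loop in this trace repeatedly calls `bdev_nvme_get_io_paths` over the bdevperf RPC socket and filters the JSON with jq (`.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current`). The same selection logic can be sketched in Python; the sample document below is a hypothetical response whose shape is inferred from that jq filter, not output captured from this run:

```python
import json

# Hypothetical bdev_nvme_get_io_paths response; structure inferred from the
# jq filter used in the log, values chosen to mirror the first check_status.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": false,
         "connected": true, "accessible": false},
        {"transport": {"trsvcid": "4421"}, "current": true,
         "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(doc, trsvcid, field):
    """Mirror of the test's jq filter: find the io_path whose listener
    port matches trsvcid and return the requested boolean field."""
    for group in doc["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

print(port_status(sample, "4420", "current"))     # False for this sample
print(port_status(sample, "4421", "accessible"))  # True for this sample
```

The shell test compares the jq output against the expected literal (`[[ false == \f\a\l\s\e ]]`); the function above returns the same per-port boolean for a given field.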
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.164 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.164 05:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.422 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:25.679 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.679 
05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:25.680 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.680 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:25.938 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.938 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:25.938 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:26.196 05:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:26.454 05:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:27.387 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:27.387 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:27.387 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.387 05:52:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.645 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.645 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:27.645 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.645 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.645 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.645 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.903 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:27.903 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.903 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.903 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:27.903 05:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.903 05:52:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.160 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.160 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:28.160 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.161 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:28.419 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.419 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:28.419 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.419 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.677 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.677 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:28.677 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:28.677 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:28.935 05:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:30.309 05:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:30.310 05:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:30.310 05:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.310 05:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:30.310 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.310 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:30.310 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.310 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:30.310 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:30.310 05:52:48 
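Each `set_ANA_state A B` call above is followed by a `check_status` whose six arguments encode the expected current/connected/accessible flags for ports 4420 and 4421. Under the `active_active` multipath policy set earlier in the trace, the expectations read off this log can be tabulated as follows; this mapping is a reconstruction from the log, the authoritative logic lives in `host/multipath_status.sh`:

```python
# Expected (current, connected, accessible) per port after each ANA
# transition, reconstructed from the check_status arguments in this log
# (active_active policy). Keys are (ana_state_4420, ana_state_4421).
expected = {
    ("optimized", "optimized"):         {"4420": (True, True, True),
                                         "4421": (True, True, True)},
    ("non_optimized", "optimized"):     {"4420": (False, True, True),
                                         "4421": (True, True, True)},
    ("non_optimized", "non_optimized"): {"4420": (True, True, True),
                                         "4421": (True, True, True)},
    ("non_optimized", "inaccessible"):  {"4420": (True, True, True),
                                         "4421": (False, True, False)},
}

# An inaccessible listener keeps its TCP connection ("connected" stays
# true) but stops being an accessible or current I/O path.
print(expected[("non_optimized", "inaccessible")]["4421"])
```

Note that with both ports `non_optimized`, active_active still reports both paths as current, whereas mixing `non_optimized` with `optimized` demotes the non-optimized port's `current` flag.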
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:30.310 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.568 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:30.568 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.568 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:30.568 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:30.568 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.826 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.826 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:30.826 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.826 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.084 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.084 
05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:31.084 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.084 05:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 253984 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 253984 ']' 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 253984 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253984 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253984' 00:27:31.343 killing process with pid 253984 00:27:31.343 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 253984 00:27:31.343 05:52:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 253984 00:27:31.343 { 00:27:31.343 "results": [ 00:27:31.343 { 00:27:31.343 "job": "Nvme0n1", 00:27:31.343 "core_mask": "0x4", 00:27:31.343 "workload": "verify", 00:27:31.343 "status": "terminated", 00:27:31.343 "verify_range": { 00:27:31.343 "start": 0, 00:27:31.343 "length": 16384 00:27:31.343 }, 00:27:31.343 "queue_depth": 128, 00:27:31.343 "io_size": 4096, 00:27:31.343 "runtime": 28.927228, 00:27:31.343 "iops": 10821.119811410896, 00:27:31.343 "mibps": 42.26999926332381, 00:27:31.343 "io_failed": 0, 00:27:31.343 "io_timeout": 0, 00:27:31.343 "avg_latency_us": 11808.16113568291, 00:27:31.343 "min_latency_us": 550.0342857142857, 00:27:31.343 "max_latency_us": 3019898.88 00:27:31.343 } 00:27:31.343 ], 00:27:31.343 "core_count": 1 00:27:31.343 } 00:27:31.604 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 253984 00:27:31.604 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:31.604 [2024-12-10 05:52:18.870400] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:27:31.604 [2024-12-10 05:52:18.870452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253984 ] 00:27:31.604 [2024-12-10 05:52:18.934766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.604 [2024-12-10 05:52:18.973865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.604 Running I/O for 90 seconds... 
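The terminated bdevperf job reports both `iops` and `mibps`, which are mutually consistent given the 4096-byte `io_size`. A quick sanity check of the figures from the results block above:

```python
# Sanity-check the bdevperf summary printed in the log: throughput in
# MiB/s should equal iops * io_size / 2**20 for 4096-byte I/Os.
iops = 10821.119811410896   # "iops" from the results block
io_size = 4096              # "io_size" (bytes)
runtime = 28.927228         # "runtime" (seconds)

mibps = iops * io_size / 2**20
print(round(mibps, 2))      # ~42.27, matching the reported "mibps"

# Approximate total I/O count over the run (not reported directly).
total_ios = iops * runtime
print(int(total_ios))
```

The `"status": "terminated"` field reflects that the run was ended by the `killprocess` above rather than completing its full 90-second duration.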
00:27:31.604 11747.00 IOPS, 45.89 MiB/s [2024-12-10T04:52:49.563Z] 11767.50 IOPS, 45.97 MiB/s [2024-12-10T04:52:49.563Z] 11692.67 IOPS, 45.67 MiB/s [2024-12-10T04:52:49.563Z] 11707.50 IOPS, 45.73 MiB/s [2024-12-10T04:52:49.563Z] 11678.60 IOPS, 45.62 MiB/s [2024-12-10T04:52:49.563Z] 11650.50 IOPS, 45.51 MiB/s [2024-12-10T04:52:49.563Z] 11663.57 IOPS, 45.56 MiB/s [2024-12-10T04:52:49.563Z] 11675.25 IOPS, 45.61 MiB/s [2024-12-10T04:52:49.563Z] 11677.67 IOPS, 45.62 MiB/s [2024-12-10T04:52:49.563Z] 11648.90 IOPS, 45.50 MiB/s [2024-12-10T04:52:49.563Z] 11633.45 IOPS, 45.44 MiB/s [2024-12-10T04:52:49.563Z] 11627.00 IOPS, 45.42 MiB/s [2024-12-10T04:52:49.563Z] [2024-12-10 05:52:33.028547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.604 [2024-12-10 05:52:33.028588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:31.604 [2024-12-10 05:52:33.028639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028687] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:48 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.028965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.028971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.030210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13032 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.605 [2024-12-10 05:52:33.030530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 
p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.605 [2024-12-10 05:52:33.030667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:31.605 [2024-12-10 05:52:33.030679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:31.606 [2024-12-10 05:52:33.030724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:27:31.606 [2024-12-10 05:52:33.030835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.030980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.030987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 
[2024-12-10 05:52:33.031008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:31.606 
[2024-12-10 05:52:33.031130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 
05:52:33.031248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 
05:52:33.031367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 
05:52:33.031487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.606 [2024-12-10 05:52:33.031572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:31.606 [2024-12-10 05:52:33.031643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 
05:52:33.031668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 
05:52:33.031789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 
05:52:33.031918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.031987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.031994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 
05:52:33.032039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 05:52:33.032061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 05:52:33.032083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 05:52:33.032105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 05:52:33.032127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 
05:52:33.032166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 
05:52:33.032296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 
05:52:33.032425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.607 [2024-12-10 05:52:33.032499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 05:52:33.032521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:31.607 [2024-12-10 05:52:33.032537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.607 [2024-12-10 05:52:33.032545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:33.032720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.608 [2024-12-10 05:52:33.032727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.608 11467.00 IOPS, 44.79 MiB/s [2024-12-10T04:52:49.567Z] 10647.93 IOPS, 41.59 MiB/s [2024-12-10T04:52:49.567Z] 9938.07 IOPS, 38.82 MiB/s [2024-12-10T04:52:49.567Z] 9446.69 IOPS, 36.90 MiB/s [2024-12-10T04:52:49.567Z] 9567.71 IOPS, 37.37 MiB/s [2024-12-10T04:52:49.567Z] 9676.56 IOPS, 37.80 MiB/s [2024-12-10T04:52:49.567Z] 9852.21 IOPS, 38.49 MiB/s [2024-12-10T04:52:49.567Z] 10046.50 IOPS, 39.24 MiB/s [2024-12-10T04:52:49.567Z] 10228.86 IOPS, 39.96 MiB/s [2024-12-10T04:52:49.567Z] 10299.23 IOPS, 40.23 MiB/s [2024-12-10T04:52:49.567Z] 10355.52 IOPS, 40.45 MiB/s [2024-12-10T04:52:49.567Z] 10410.83 IOPS, 40.67 MiB/s [2024-12-10T04:52:49.567Z] 10554.28 IOPS, 41.23 MiB/s [2024-12-10T04:52:49.567Z] 10686.04 IOPS, 41.74 MiB/s [2024-12-10T04:52:49.567Z] [2024-12-10 05:52:46.800892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.800934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.800967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.800976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.800989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.801981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.801987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.802002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.802008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.802021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.608 [2024-12-10 05:52:46.802028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:31.608 [2024-12-10 05:52:46.802040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:31.609 [2024-12-10 05:52:46.802828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.609 [2024-12-10 05:52:46.802835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:31.610 [2024-12-10 05:52:46.802847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.610 [2024-12-10 05:52:46.802856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:31.610 10767.15 IOPS, 42.06 MiB/s [2024-12-10T04:52:49.569Z] 10800.93 IOPS, 42.19 MiB/s [2024-12-10T04:52:49.569Z] Received shutdown signal, test time was about 28.927864 seconds 00:27:31.610 00:27:31.610 Latency(us) 00:27:31.610 [2024-12-10T04:52:49.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.610 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:31.610 Verification LBA range: start 0x0 length 0x4000 00:27:31.610 Nvme0n1 : 28.93 10821.12 42.27 0.00 0.00 11808.16 550.03 3019898.88 00:27:31.610 [2024-12-10T04:52:49.569Z] =================================================================================================================== 00:27:31.610 [2024-12-10T04:52:49.569Z] Total : 10821.12 42.27 0.00 0.00 11808.16 550.03 3019898.88 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.610 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.610 rmmod nvme_tcp 00:27:31.868 rmmod nvme_fabrics 00:27:31.868 rmmod nvme_keyring 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 253734 ']' 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 253734 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 253734 ']' 00:27:31.868 05:52:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 253734 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253734 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253734' 00:27:31.868 killing process with pid 253734 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 253734 00:27:31.868 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 253734 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:32.127 05:52:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.127 05:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.031 00:27:34.031 real 0m41.572s 00:27:34.031 user 1m51.038s 00:27:34.031 sys 0m11.922s 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:34.031 ************************************ 00:27:34.031 END TEST nvmf_host_multipath_status 00:27:34.031 ************************************ 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.031 05:52:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.031 ************************************ 00:27:34.031 START TEST nvmf_discovery_remove_ifc 00:27:34.031 ************************************ 00:27:34.290 05:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:34.290 * Looking for test storage... 00:27:34.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:34.290 05:52:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:34.290 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.291 --rc genhtml_branch_coverage=1 00:27:34.291 --rc genhtml_function_coverage=1 00:27:34.291 --rc genhtml_legend=1 00:27:34.291 --rc geninfo_all_blocks=1 00:27:34.291 --rc geninfo_unexecuted_blocks=1 00:27:34.291 00:27:34.291 ' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.291 --rc genhtml_branch_coverage=1 00:27:34.291 --rc genhtml_function_coverage=1 00:27:34.291 --rc genhtml_legend=1 00:27:34.291 --rc geninfo_all_blocks=1 00:27:34.291 --rc geninfo_unexecuted_blocks=1 00:27:34.291 00:27:34.291 ' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.291 --rc genhtml_branch_coverage=1 00:27:34.291 --rc genhtml_function_coverage=1 00:27:34.291 --rc genhtml_legend=1 00:27:34.291 --rc geninfo_all_blocks=1 00:27:34.291 --rc geninfo_unexecuted_blocks=1 00:27:34.291 00:27:34.291 ' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.291 --rc genhtml_branch_coverage=1 00:27:34.291 --rc genhtml_function_coverage=1 00:27:34.291 --rc genhtml_legend=1 00:27:34.291 --rc geninfo_all_blocks=1 00:27:34.291 --rc geninfo_unexecuted_blocks=1 00:27:34.291 00:27:34.291 ' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.291 05:52:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.291 05:52:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:34.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:34.291 
05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.291 05:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:40.861 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.861 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:40.862 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:40.862 Found net devices under 0000:af:00.0: cvl_0_0 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.862 05:52:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:40.862 Found net devices under 0000:af:00.1: cvl_0_1 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.862 05:52:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.862 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.122 05:52:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:27:41.122 00:27:41.122 --- 10.0.0.2 ping statistics --- 00:27:41.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.122 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:41.122 00:27:41.122 --- 10.0.0.1 ping statistics --- 00:27:41.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.122 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=262931 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 262931 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 262931 ']' 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.122 05:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.122 [2024-12-10 05:52:58.998867] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:27:41.122 [2024-12-10 05:52:58.998911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.381 [2024-12-10 05:52:59.085354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.381 [2024-12-10 05:52:59.124803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.381 [2024-12-10 05:52:59.124837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:41.381 [2024-12-10 05:52:59.124844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.381 [2024-12-10 05:52:59.124850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.381 [2024-12-10 05:52:59.124856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.381 [2024-12-10 05:52:59.125400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.381 [2024-12-10 05:52:59.267895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.381 [2024-12-10 05:52:59.276058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:41.381 null0 00:27:41.381 [2024-12-10 05:52:59.308044] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=263125 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 263125 /tmp/host.sock 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 263125 ']' 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:41.381 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.381 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.640 [2024-12-10 05:52:59.376880] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:27:41.640 [2024-12-10 05:52:59.376921] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263125 ] 00:27:41.640 [2024-12-10 05:52:59.455983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.640 [2024-12-10 05:52:59.496387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.640 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.898 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.898 05:52:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:41.898 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.898 05:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.954 [2024-12-10 05:53:00.623983] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:42.954 [2024-12-10 05:53:00.624008] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:42.954 [2024-12-10 05:53:00.624019] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:42.954 [2024-12-10 05:53:00.751416] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:43.250 [2024-12-10 05:53:00.936413] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:43.250 [2024-12-10 05:53:00.937176] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11f2190:1 started. 
00:27:43.250 [2024-12-10 05:53:00.938495] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:43.250 [2024-12-10 05:53:00.938538] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:43.250 [2024-12-10 05:53:00.938557] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:43.250 [2024-12-10 05:53:00.938570] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:43.250 [2024-12-10 05:53:00.938588] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.250 [2024-12-10 05:53:00.943496] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11f2190 was disconnected and freed. delete nvme_qpair. 
00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:43.250 05:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.250 05:53:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.250 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.251 05:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.185 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.444 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.444 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:44.444 05:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:45.378 05:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.312 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.312 05:53:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.570 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:46.570 05:53:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:47.504 05:53:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.437 05:53:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.437 [2024-12-10 05:53:06.380280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:48.437 [2024-12-10 05:53:06.380324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.437 [2024-12-10 05:53:06.380351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.437 [2024-12-10 05:53:06.380360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.437 [2024-12-10 05:53:06.380366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.437 [2024-12-10 05:53:06.380373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.437 [2024-12-10 05:53:06.380380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.437 [2024-12-10 05:53:06.380387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.437 
[2024-12-10 05:53:06.380393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.437 [2024-12-10 05:53:06.380400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.437 [2024-12-10 05:53:06.380406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.437 [2024-12-10 05:53:06.380413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ce950 is same with the state(6) to be set 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:48.437 05:53:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.437 [2024-12-10 05:53:06.390303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ce950 (9): Bad file descriptor 00:27:48.695 [2024-12-10 05:53:06.400338] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:48.695 [2024-12-10 05:53:06.400350] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:48.695 [2024-12-10 05:53:06.400356] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:48.695 [2024-12-10 05:53:06.400361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:48.695 [2024-12-10 05:53:06.400383] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.629 [2024-12-10 05:53:07.418251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:49.629 [2024-12-10 05:53:07.418319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ce950 with addr=10.0.0.2, port=4420 00:27:49.629 [2024-12-10 05:53:07.418350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ce950 is same with the state(6) to be set 00:27:49.629 [2024-12-10 05:53:07.418408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ce950 (9): Bad file descriptor 00:27:49.629 [2024-12-10 05:53:07.419361] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:27:49.629 [2024-12-10 05:53:07.419422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:49.629 [2024-12-10 05:53:07.419446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:49.629 [2024-12-10 05:53:07.419467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:49.629 [2024-12-10 05:53:07.419487] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:49.629 [2024-12-10 05:53:07.419504] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:49.629 [2024-12-10 05:53:07.419517] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:49.629 [2024-12-10 05:53:07.419538] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:49.629 [2024-12-10 05:53:07.419553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.629 05:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:50.564 [2024-12-10 05:53:08.422065] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:50.564 [2024-12-10 05:53:08.422084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:50.564 [2024-12-10 05:53:08.422094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.564 [2024-12-10 05:53:08.422101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.564 [2024-12-10 05:53:08.422108] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:50.564 [2024-12-10 05:53:08.422114] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:50.564 [2024-12-10 05:53:08.422135] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:50.564 [2024-12-10 05:53:08.422139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.564 [2024-12-10 05:53:08.422158] bdev_nvme.c:7261:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:50.564 [2024-12-10 05:53:08.422176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.564 [2024-12-10 05:53:08.422185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.564 [2024-12-10 05:53:08.422194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.564 [2024-12-10 05:53:08.422200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.564 [2024-12-10 05:53:08.422208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:50.564 [2024-12-10 05:53:08.422214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.564 [2024-12-10 05:53:08.422229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.564 [2024-12-10 05:53:08.422235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.564 [2024-12-10 05:53:08.422242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.564 [2024-12-10 05:53:08.422249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.564 [2024-12-10 05:53:08.422255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:50.564 [2024-12-10 05:53:08.422566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bdc60 (9): Bad file descriptor 00:27:50.565 [2024-12-10 05:53:08.423574] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:50.565 [2024-12-10 05:53:08.423585] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.565 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:50.823 05:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:51.758 05:53:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.692 [2024-12-10 05:53:10.479784] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:52.692 [2024-12-10 05:53:10.479802] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:52.692 [2024-12-10 05:53:10.479814] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:52.692 [2024-12-10 05:53:10.606175] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:52.951 05:53:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.951 [2024-12-10 05:53:10.821170] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:52.951 [2024-12-10 05:53:10.821775] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x11c9170:1 started. 
00:27:52.951 [2024-12-10 05:53:10.822818] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:52.951 [2024-12-10 05:53:10.822851] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:52.951 [2024-12-10 05:53:10.822868] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:52.951 [2024-12-10 05:53:10.822881] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:52.951 [2024-12-10 05:53:10.822888] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:52.951 [2024-12-10 05:53:10.828632] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x11c9170 was disconnected and freed. delete nvme_qpair. 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:53.886 05:53:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 263125 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 263125 ']' 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 263125 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263125 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263125' 00:27:53.886 killing process with pid 263125 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 263125 00:27:53.886 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 263125 00:27:54.144 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:54.144 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:54.144 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:54.144 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:54.145 05:53:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:54.145 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.145 05:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.145 rmmod nvme_tcp 00:27:54.145 rmmod nvme_fabrics 00:27:54.145 rmmod nvme_keyring 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 262931 ']' 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 262931 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 262931 ']' 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 262931 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 262931 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 262931' 00:27:54.145 killing process 
with pid 262931 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 262931 00:27:54.145 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 262931 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.404 05:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.938 00:27:56.938 real 0m22.353s 00:27:56.938 user 0m27.110s 00:27:56.938 sys 0m6.430s 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.938 ************************************ 00:27:56.938 END TEST nvmf_discovery_remove_ifc 00:27:56.938 ************************************ 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.938 ************************************ 00:27:56.938 START TEST nvmf_identify_kernel_target 00:27:56.938 ************************************ 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:56.938 * Looking for test storage... 
00:27:56.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:56.938 05:53:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.938 05:53:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.938 --rc genhtml_branch_coverage=1 00:27:56.938 --rc genhtml_function_coverage=1 00:27:56.938 --rc genhtml_legend=1 00:27:56.938 --rc geninfo_all_blocks=1 00:27:56.938 --rc geninfo_unexecuted_blocks=1 00:27:56.938 00:27:56.938 ' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.938 --rc genhtml_branch_coverage=1 00:27:56.938 --rc genhtml_function_coverage=1 00:27:56.938 --rc genhtml_legend=1 00:27:56.938 --rc geninfo_all_blocks=1 00:27:56.938 --rc geninfo_unexecuted_blocks=1 00:27:56.938 00:27:56.938 ' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.938 --rc genhtml_branch_coverage=1 00:27:56.938 --rc genhtml_function_coverage=1 00:27:56.938 --rc genhtml_legend=1 00:27:56.938 --rc geninfo_all_blocks=1 00:27:56.938 --rc geninfo_unexecuted_blocks=1 00:27:56.938 00:27:56.938 ' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.938 --rc genhtml_branch_coverage=1 00:27:56.938 --rc genhtml_function_coverage=1 00:27:56.938 --rc genhtml_legend=1 00:27:56.938 --rc geninfo_all_blocks=1 00:27:56.938 --rc geninfo_unexecuted_blocks=1 00:27:56.938 00:27:56.938 ' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.938 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.939 05:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.505 05:53:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:03.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.505 05:53:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:03.505 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.505 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.506 05:53:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:03.506 Found net devices under 0000:af:00.0: cvl_0_0 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:03.506 Found net devices under 0000:af:00.1: cvl_0_1 
00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:03.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:28:03.506 00:28:03.506 --- 10.0.0.2 ping statistics --- 00:28:03.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.506 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:03.506 00:28:03.506 --- 10.0.0.1 ping statistics --- 00:28:03.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.506 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:03.506 
05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:03.506 05:53:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:06.798 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:28:06.798 Waiting for block devices as requested 00:28:06.798 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:07.057 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:07.057 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:07.057 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:07.316 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:07.316 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:07.316 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:07.575 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:07.575 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:07.575 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:07.575 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:07.833 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:07.833 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:07.833 0000:80:04.3 (8086 2021): 
vfio-pci -> ioatdma 00:28:08.092 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:08.092 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:08.092 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:08.092 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:08.351 No valid GPT data, bailing 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in 
/sys/block/nvme* 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:28:08.351 No valid GPT data, bailing 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:28:08.351 05:53:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:08.351 05:53:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:08.351 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:08.351 00:28:08.351 Discovery Log Number of Records 2, Generation counter 2 00:28:08.351 =====Discovery Log Entry 0====== 00:28:08.351 trtype: tcp 00:28:08.351 adrfam: ipv4 00:28:08.351 subtype: current discovery subsystem 00:28:08.351 treq: not specified, sq flow control disable supported 00:28:08.351 portid: 1 00:28:08.351 trsvcid: 4420 00:28:08.351 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:08.351 traddr: 10.0.0.1 00:28:08.351 eflags: none 00:28:08.351 sectype: none 00:28:08.352 =====Discovery Log Entry 1====== 00:28:08.352 trtype: tcp 00:28:08.352 adrfam: ipv4 00:28:08.352 subtype: nvme subsystem 00:28:08.352 treq: not specified, sq flow control disable supported 00:28:08.352 portid: 1 00:28:08.352 trsvcid: 4420 00:28:08.352 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:08.352 traddr: 10.0.0.1 00:28:08.352 eflags: none 00:28:08.352 sectype: none 00:28:08.352 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:08.352 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:08.611 ===================================================== 00:28:08.611 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:08.611 ===================================================== 00:28:08.611 Controller Capabilities/Features 00:28:08.611 ================================ 00:28:08.611 Vendor ID: 0000 00:28:08.611 
Subsystem Vendor ID: 0000 00:28:08.611 Serial Number: 602884de77055292d979 00:28:08.611 Model Number: Linux 00:28:08.611 Firmware Version: 6.8.9-20 00:28:08.611 Recommended Arb Burst: 0 00:28:08.611 IEEE OUI Identifier: 00 00 00 00:28:08.611 Multi-path I/O 00:28:08.611 May have multiple subsystem ports: No 00:28:08.611 May have multiple controllers: No 00:28:08.611 Associated with SR-IOV VF: No 00:28:08.611 Max Data Transfer Size: Unlimited 00:28:08.611 Max Number of Namespaces: 0 00:28:08.611 Max Number of I/O Queues: 1024 00:28:08.611 NVMe Specification Version (VS): 1.3 00:28:08.611 NVMe Specification Version (Identify): 1.3 00:28:08.611 Maximum Queue Entries: 1024 00:28:08.611 Contiguous Queues Required: No 00:28:08.611 Arbitration Mechanisms Supported 00:28:08.611 Weighted Round Robin: Not Supported 00:28:08.611 Vendor Specific: Not Supported 00:28:08.611 Reset Timeout: 7500 ms 00:28:08.611 Doorbell Stride: 4 bytes 00:28:08.611 NVM Subsystem Reset: Not Supported 00:28:08.611 Command Sets Supported 00:28:08.611 NVM Command Set: Supported 00:28:08.611 Boot Partition: Not Supported 00:28:08.611 Memory Page Size Minimum: 4096 bytes 00:28:08.611 Memory Page Size Maximum: 4096 bytes 00:28:08.612 Persistent Memory Region: Not Supported 00:28:08.612 Optional Asynchronous Events Supported 00:28:08.612 Namespace Attribute Notices: Not Supported 00:28:08.612 Firmware Activation Notices: Not Supported 00:28:08.612 ANA Change Notices: Not Supported 00:28:08.612 PLE Aggregate Log Change Notices: Not Supported 00:28:08.612 LBA Status Info Alert Notices: Not Supported 00:28:08.612 EGE Aggregate Log Change Notices: Not Supported 00:28:08.612 Normal NVM Subsystem Shutdown event: Not Supported 00:28:08.612 Zone Descriptor Change Notices: Not Supported 00:28:08.612 Discovery Log Change Notices: Supported 00:28:08.612 Controller Attributes 00:28:08.612 128-bit Host Identifier: Not Supported 00:28:08.612 Non-Operational Permissive Mode: Not Supported 00:28:08.612 NVM Sets: Not 
Supported 00:28:08.612 Read Recovery Levels: Not Supported 00:28:08.612 Endurance Groups: Not Supported 00:28:08.612 Predictable Latency Mode: Not Supported 00:28:08.612 Traffic Based Keep ALive: Not Supported 00:28:08.612 Namespace Granularity: Not Supported 00:28:08.612 SQ Associations: Not Supported 00:28:08.612 UUID List: Not Supported 00:28:08.612 Multi-Domain Subsystem: Not Supported 00:28:08.612 Fixed Capacity Management: Not Supported 00:28:08.612 Variable Capacity Management: Not Supported 00:28:08.612 Delete Endurance Group: Not Supported 00:28:08.612 Delete NVM Set: Not Supported 00:28:08.612 Extended LBA Formats Supported: Not Supported 00:28:08.612 Flexible Data Placement Supported: Not Supported 00:28:08.612 00:28:08.612 Controller Memory Buffer Support 00:28:08.612 ================================ 00:28:08.612 Supported: No 00:28:08.612 00:28:08.612 Persistent Memory Region Support 00:28:08.612 ================================ 00:28:08.612 Supported: No 00:28:08.612 00:28:08.612 Admin Command Set Attributes 00:28:08.612 ============================ 00:28:08.612 Security Send/Receive: Not Supported 00:28:08.612 Format NVM: Not Supported 00:28:08.612 Firmware Activate/Download: Not Supported 00:28:08.612 Namespace Management: Not Supported 00:28:08.612 Device Self-Test: Not Supported 00:28:08.612 Directives: Not Supported 00:28:08.612 NVMe-MI: Not Supported 00:28:08.612 Virtualization Management: Not Supported 00:28:08.612 Doorbell Buffer Config: Not Supported 00:28:08.612 Get LBA Status Capability: Not Supported 00:28:08.612 Command & Feature Lockdown Capability: Not Supported 00:28:08.612 Abort Command Limit: 1 00:28:08.612 Async Event Request Limit: 1 00:28:08.612 Number of Firmware Slots: N/A 00:28:08.612 Firmware Slot 1 Read-Only: N/A 00:28:08.612 Firmware Activation Without Reset: N/A 00:28:08.612 Multiple Update Detection Support: N/A 00:28:08.612 Firmware Update Granularity: No Information Provided 00:28:08.612 Per-Namespace SMART Log: No 
00:28:08.612 Asymmetric Namespace Access Log Page: Not Supported 00:28:08.612 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:08.612 Command Effects Log Page: Not Supported 00:28:08.612 Get Log Page Extended Data: Supported 00:28:08.612 Telemetry Log Pages: Not Supported 00:28:08.612 Persistent Event Log Pages: Not Supported 00:28:08.612 Supported Log Pages Log Page: May Support 00:28:08.612 Commands Supported & Effects Log Page: Not Supported 00:28:08.612 Feature Identifiers & Effects Log Page:May Support 00:28:08.612 NVMe-MI Commands & Effects Log Page: May Support 00:28:08.612 Data Area 4 for Telemetry Log: Not Supported 00:28:08.612 Error Log Page Entries Supported: 1 00:28:08.612 Keep Alive: Not Supported 00:28:08.612 00:28:08.612 NVM Command Set Attributes 00:28:08.612 ========================== 00:28:08.612 Submission Queue Entry Size 00:28:08.612 Max: 1 00:28:08.612 Min: 1 00:28:08.612 Completion Queue Entry Size 00:28:08.612 Max: 1 00:28:08.612 Min: 1 00:28:08.612 Number of Namespaces: 0 00:28:08.612 Compare Command: Not Supported 00:28:08.612 Write Uncorrectable Command: Not Supported 00:28:08.612 Dataset Management Command: Not Supported 00:28:08.612 Write Zeroes Command: Not Supported 00:28:08.612 Set Features Save Field: Not Supported 00:28:08.612 Reservations: Not Supported 00:28:08.612 Timestamp: Not Supported 00:28:08.612 Copy: Not Supported 00:28:08.612 Volatile Write Cache: Not Present 00:28:08.612 Atomic Write Unit (Normal): 1 00:28:08.612 Atomic Write Unit (PFail): 1 00:28:08.612 Atomic Compare & Write Unit: 1 00:28:08.612 Fused Compare & Write: Not Supported 00:28:08.612 Scatter-Gather List 00:28:08.612 SGL Command Set: Supported 00:28:08.612 SGL Keyed: Not Supported 00:28:08.612 SGL Bit Bucket Descriptor: Not Supported 00:28:08.612 SGL Metadata Pointer: Not Supported 00:28:08.612 Oversized SGL: Not Supported 00:28:08.612 SGL Metadata Address: Not Supported 00:28:08.612 SGL Offset: Supported 00:28:08.612 Transport SGL Data Block: Not 
Supported 00:28:08.612 Replay Protected Memory Block: Not Supported 00:28:08.612 00:28:08.612 Firmware Slot Information 00:28:08.612 ========================= 00:28:08.612 Active slot: 0 00:28:08.612 00:28:08.612 00:28:08.612 Error Log 00:28:08.612 ========= 00:28:08.612 00:28:08.612 Active Namespaces 00:28:08.612 ================= 00:28:08.612 Discovery Log Page 00:28:08.612 ================== 00:28:08.612 Generation Counter: 2 00:28:08.612 Number of Records: 2 00:28:08.612 Record Format: 0 00:28:08.612 00:28:08.612 Discovery Log Entry 0 00:28:08.612 ---------------------- 00:28:08.612 Transport Type: 3 (TCP) 00:28:08.612 Address Family: 1 (IPv4) 00:28:08.612 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:08.612 Entry Flags: 00:28:08.612 Duplicate Returned Information: 0 00:28:08.612 Explicit Persistent Connection Support for Discovery: 0 00:28:08.612 Transport Requirements: 00:28:08.612 Secure Channel: Not Specified 00:28:08.612 Port ID: 1 (0x0001) 00:28:08.612 Controller ID: 65535 (0xffff) 00:28:08.612 Admin Max SQ Size: 32 00:28:08.612 Transport Service Identifier: 4420 00:28:08.612 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:08.612 Transport Address: 10.0.0.1 00:28:08.612 Discovery Log Entry 1 00:28:08.612 ---------------------- 00:28:08.612 Transport Type: 3 (TCP) 00:28:08.612 Address Family: 1 (IPv4) 00:28:08.612 Subsystem Type: 2 (NVM Subsystem) 00:28:08.612 Entry Flags: 00:28:08.612 Duplicate Returned Information: 0 00:28:08.612 Explicit Persistent Connection Support for Discovery: 0 00:28:08.612 Transport Requirements: 00:28:08.612 Secure Channel: Not Specified 00:28:08.612 Port ID: 1 (0x0001) 00:28:08.612 Controller ID: 65535 (0xffff) 00:28:08.612 Admin Max SQ Size: 32 00:28:08.612 Transport Service Identifier: 4420 00:28:08.612 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:08.612 Transport Address: 10.0.0.1 00:28:08.612 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:08.612 get_feature(0x01) failed 00:28:08.612 get_feature(0x02) failed 00:28:08.612 get_feature(0x04) failed 00:28:08.612 ===================================================== 00:28:08.612 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:08.612 ===================================================== 00:28:08.612 Controller Capabilities/Features 00:28:08.612 ================================ 00:28:08.612 Vendor ID: 0000 00:28:08.612 Subsystem Vendor ID: 0000 00:28:08.612 Serial Number: 8da3665cac7e161045b4 00:28:08.612 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:08.612 Firmware Version: 6.8.9-20 00:28:08.612 Recommended Arb Burst: 6 00:28:08.612 IEEE OUI Identifier: 00 00 00 00:28:08.612 Multi-path I/O 00:28:08.612 May have multiple subsystem ports: Yes 00:28:08.612 May have multiple controllers: Yes 00:28:08.612 Associated with SR-IOV VF: No 00:28:08.612 Max Data Transfer Size: Unlimited 00:28:08.612 Max Number of Namespaces: 1024 00:28:08.612 Max Number of I/O Queues: 128 00:28:08.612 NVMe Specification Version (VS): 1.3 00:28:08.612 NVMe Specification Version (Identify): 1.3 00:28:08.612 Maximum Queue Entries: 1024 00:28:08.612 Contiguous Queues Required: No 00:28:08.612 Arbitration Mechanisms Supported 00:28:08.612 Weighted Round Robin: Not Supported 00:28:08.612 Vendor Specific: Not Supported 00:28:08.612 Reset Timeout: 7500 ms 00:28:08.612 Doorbell Stride: 4 bytes 00:28:08.612 NVM Subsystem Reset: Not Supported 00:28:08.612 Command Sets Supported 00:28:08.612 NVM Command Set: Supported 00:28:08.612 Boot Partition: Not Supported 00:28:08.612 Memory Page Size Minimum: 4096 bytes 00:28:08.612 Memory Page Size Maximum: 4096 bytes 00:28:08.612 Persistent Memory Region: Not Supported 00:28:08.612 Optional Asynchronous 
Events Supported 00:28:08.612 Namespace Attribute Notices: Supported 00:28:08.612 Firmware Activation Notices: Not Supported 00:28:08.612 ANA Change Notices: Supported 00:28:08.612 PLE Aggregate Log Change Notices: Not Supported 00:28:08.612 LBA Status Info Alert Notices: Not Supported 00:28:08.612 EGE Aggregate Log Change Notices: Not Supported 00:28:08.613 Normal NVM Subsystem Shutdown event: Not Supported 00:28:08.613 Zone Descriptor Change Notices: Not Supported 00:28:08.613 Discovery Log Change Notices: Not Supported 00:28:08.613 Controller Attributes 00:28:08.613 128-bit Host Identifier: Supported 00:28:08.613 Non-Operational Permissive Mode: Not Supported 00:28:08.613 NVM Sets: Not Supported 00:28:08.613 Read Recovery Levels: Not Supported 00:28:08.613 Endurance Groups: Not Supported 00:28:08.613 Predictable Latency Mode: Not Supported 00:28:08.613 Traffic Based Keep ALive: Supported 00:28:08.613 Namespace Granularity: Not Supported 00:28:08.613 SQ Associations: Not Supported 00:28:08.613 UUID List: Not Supported 00:28:08.613 Multi-Domain Subsystem: Not Supported 00:28:08.613 Fixed Capacity Management: Not Supported 00:28:08.613 Variable Capacity Management: Not Supported 00:28:08.613 Delete Endurance Group: Not Supported 00:28:08.613 Delete NVM Set: Not Supported 00:28:08.613 Extended LBA Formats Supported: Not Supported 00:28:08.613 Flexible Data Placement Supported: Not Supported 00:28:08.613 00:28:08.613 Controller Memory Buffer Support 00:28:08.613 ================================ 00:28:08.613 Supported: No 00:28:08.613 00:28:08.613 Persistent Memory Region Support 00:28:08.613 ================================ 00:28:08.613 Supported: No 00:28:08.613 00:28:08.613 Admin Command Set Attributes 00:28:08.613 ============================ 00:28:08.613 Security Send/Receive: Not Supported 00:28:08.613 Format NVM: Not Supported 00:28:08.613 Firmware Activate/Download: Not Supported 00:28:08.613 Namespace Management: Not Supported 00:28:08.613 Device Self-Test: 
Not Supported 00:28:08.613 Directives: Not Supported 00:28:08.613 NVMe-MI: Not Supported 00:28:08.613 Virtualization Management: Not Supported 00:28:08.613 Doorbell Buffer Config: Not Supported 00:28:08.613 Get LBA Status Capability: Not Supported 00:28:08.613 Command & Feature Lockdown Capability: Not Supported 00:28:08.613 Abort Command Limit: 4 00:28:08.613 Async Event Request Limit: 4 00:28:08.613 Number of Firmware Slots: N/A 00:28:08.613 Firmware Slot 1 Read-Only: N/A 00:28:08.613 Firmware Activation Without Reset: N/A 00:28:08.613 Multiple Update Detection Support: N/A 00:28:08.613 Firmware Update Granularity: No Information Provided 00:28:08.613 Per-Namespace SMART Log: Yes 00:28:08.613 Asymmetric Namespace Access Log Page: Supported 00:28:08.613 ANA Transition Time : 10 sec 00:28:08.613 00:28:08.613 Asymmetric Namespace Access Capabilities 00:28:08.613 ANA Optimized State : Supported 00:28:08.613 ANA Non-Optimized State : Supported 00:28:08.613 ANA Inaccessible State : Supported 00:28:08.613 ANA Persistent Loss State : Supported 00:28:08.613 ANA Change State : Supported 00:28:08.613 ANAGRPID is not changed : No 00:28:08.613 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:08.613 00:28:08.613 ANA Group Identifier Maximum : 128 00:28:08.613 Number of ANA Group Identifiers : 128 00:28:08.613 Max Number of Allowed Namespaces : 1024 00:28:08.613 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:08.613 Command Effects Log Page: Supported 00:28:08.613 Get Log Page Extended Data: Supported 00:28:08.613 Telemetry Log Pages: Not Supported 00:28:08.613 Persistent Event Log Pages: Not Supported 00:28:08.613 Supported Log Pages Log Page: May Support 00:28:08.613 Commands Supported & Effects Log Page: Not Supported 00:28:08.613 Feature Identifiers & Effects Log Page:May Support 00:28:08.613 NVMe-MI Commands & Effects Log Page: May Support 00:28:08.613 Data Area 4 for Telemetry Log: Not Supported 00:28:08.613 Error Log Page Entries Supported: 128 00:28:08.613 Keep 
Alive: Supported 00:28:08.613 Keep Alive Granularity: 1000 ms 00:28:08.613 00:28:08.613 NVM Command Set Attributes 00:28:08.613 ========================== 00:28:08.613 Submission Queue Entry Size 00:28:08.613 Max: 64 00:28:08.613 Min: 64 00:28:08.613 Completion Queue Entry Size 00:28:08.613 Max: 16 00:28:08.613 Min: 16 00:28:08.613 Number of Namespaces: 1024 00:28:08.613 Compare Command: Not Supported 00:28:08.613 Write Uncorrectable Command: Not Supported 00:28:08.613 Dataset Management Command: Supported 00:28:08.613 Write Zeroes Command: Supported 00:28:08.613 Set Features Save Field: Not Supported 00:28:08.613 Reservations: Not Supported 00:28:08.613 Timestamp: Not Supported 00:28:08.613 Copy: Not Supported 00:28:08.613 Volatile Write Cache: Present 00:28:08.613 Atomic Write Unit (Normal): 1 00:28:08.613 Atomic Write Unit (PFail): 1 00:28:08.613 Atomic Compare & Write Unit: 1 00:28:08.613 Fused Compare & Write: Not Supported 00:28:08.613 Scatter-Gather List 00:28:08.613 SGL Command Set: Supported 00:28:08.613 SGL Keyed: Not Supported 00:28:08.613 SGL Bit Bucket Descriptor: Not Supported 00:28:08.613 SGL Metadata Pointer: Not Supported 00:28:08.613 Oversized SGL: Not Supported 00:28:08.613 SGL Metadata Address: Not Supported 00:28:08.613 SGL Offset: Supported 00:28:08.613 Transport SGL Data Block: Not Supported 00:28:08.613 Replay Protected Memory Block: Not Supported 00:28:08.613 00:28:08.613 Firmware Slot Information 00:28:08.613 ========================= 00:28:08.613 Active slot: 0 00:28:08.613 00:28:08.613 Asymmetric Namespace Access 00:28:08.613 =========================== 00:28:08.613 Change Count : 0 00:28:08.613 Number of ANA Group Descriptors : 1 00:28:08.613 ANA Group Descriptor : 0 00:28:08.613 ANA Group ID : 1 00:28:08.613 Number of NSID Values : 1 00:28:08.613 Change Count : 0 00:28:08.613 ANA State : 1 00:28:08.613 Namespace Identifier : 1 00:28:08.613 00:28:08.613 Commands Supported and Effects 00:28:08.613 ============================== 
00:28:08.613 Admin Commands 00:28:08.613 -------------- 00:28:08.613 Get Log Page (02h): Supported 00:28:08.613 Identify (06h): Supported 00:28:08.613 Abort (08h): Supported 00:28:08.613 Set Features (09h): Supported 00:28:08.613 Get Features (0Ah): Supported 00:28:08.613 Asynchronous Event Request (0Ch): Supported 00:28:08.613 Keep Alive (18h): Supported 00:28:08.613 I/O Commands 00:28:08.613 ------------ 00:28:08.613 Flush (00h): Supported 00:28:08.613 Write (01h): Supported LBA-Change 00:28:08.613 Read (02h): Supported 00:28:08.613 Write Zeroes (08h): Supported LBA-Change 00:28:08.613 Dataset Management (09h): Supported 00:28:08.613 00:28:08.613 Error Log 00:28:08.613 ========= 00:28:08.613 Entry: 0 00:28:08.613 Error Count: 0x3 00:28:08.613 Submission Queue Id: 0x0 00:28:08.613 Command Id: 0x5 00:28:08.613 Phase Bit: 0 00:28:08.613 Status Code: 0x2 00:28:08.613 Status Code Type: 0x0 00:28:08.613 Do Not Retry: 1 00:28:08.613 Error Location: 0x28 00:28:08.613 LBA: 0x0 00:28:08.613 Namespace: 0x0 00:28:08.613 Vendor Log Page: 0x0 00:28:08.613 ----------- 00:28:08.613 Entry: 1 00:28:08.613 Error Count: 0x2 00:28:08.613 Submission Queue Id: 0x0 00:28:08.613 Command Id: 0x5 00:28:08.613 Phase Bit: 0 00:28:08.613 Status Code: 0x2 00:28:08.613 Status Code Type: 0x0 00:28:08.613 Do Not Retry: 1 00:28:08.613 Error Location: 0x28 00:28:08.613 LBA: 0x0 00:28:08.613 Namespace: 0x0 00:28:08.613 Vendor Log Page: 0x0 00:28:08.613 ----------- 00:28:08.613 Entry: 2 00:28:08.613 Error Count: 0x1 00:28:08.613 Submission Queue Id: 0x0 00:28:08.613 Command Id: 0x4 00:28:08.613 Phase Bit: 0 00:28:08.613 Status Code: 0x2 00:28:08.613 Status Code Type: 0x0 00:28:08.613 Do Not Retry: 1 00:28:08.613 Error Location: 0x28 00:28:08.613 LBA: 0x0 00:28:08.613 Namespace: 0x0 00:28:08.613 Vendor Log Page: 0x0 00:28:08.613 00:28:08.613 Number of Queues 00:28:08.613 ================ 00:28:08.613 Number of I/O Submission Queues: 128 00:28:08.613 Number of I/O Completion Queues: 128 00:28:08.613 
00:28:08.613 ZNS Specific Controller Data 00:28:08.613 ============================ 00:28:08.613 Zone Append Size Limit: 0 00:28:08.613 00:28:08.613 00:28:08.613 Active Namespaces 00:28:08.613 ================= 00:28:08.613 get_feature(0x05) failed 00:28:08.613 Namespace ID:1 00:28:08.613 Command Set Identifier: NVM (00h) 00:28:08.613 Deallocate: Supported 00:28:08.613 Deallocated/Unwritten Error: Not Supported 00:28:08.613 Deallocated Read Value: Unknown 00:28:08.614 Deallocate in Write Zeroes: Not Supported 00:28:08.614 Deallocated Guard Field: 0xFFFF 00:28:08.614 Flush: Supported 00:28:08.614 Reservation: Not Supported 00:28:08.614 Namespace Sharing Capabilities: Multiple Controllers 00:28:08.614 Size (in LBAs): 4194304 (2GiB) 00:28:08.614 Capacity (in LBAs): 4194304 (2GiB) 00:28:08.614 Utilization (in LBAs): 4194304 (2GiB) 00:28:08.614 UUID: 93d2430d-9fa1-4a8c-9432-d30b1f4f9eef 00:28:08.614 Thin Provisioning: Not Supported 00:28:08.614 Per-NS Atomic Units: Yes 00:28:08.614 Atomic Boundary Size (Normal): 0 00:28:08.614 Atomic Boundary Size (PFail): 0 00:28:08.614 Atomic Boundary Offset: 0 00:28:08.614 NGUID/EUI64 Never Reused: No 00:28:08.614 ANA group ID: 1 00:28:08.614 Namespace Write Protected: No 00:28:08.614 Number of LBA Formats: 1 00:28:08.614 Current LBA Format: LBA Format #00 00:28:08.614 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:08.614 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.614 rmmod nvme_tcp 00:28:08.614 rmmod nvme_fabrics 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.614 05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.614 
05:53:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:11.147 05:53:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:13.681 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:28:14.248 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.4 (8086 
2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:14.248 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:15.184 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:15.184 00:28:15.184 real 0m18.636s 00:28:15.184 user 0m5.027s 00:28:15.184 sys 0m9.961s 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.184 ************************************ 00:28:15.184 END TEST nvmf_identify_kernel_target 00:28:15.184 ************************************ 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.184 ************************************ 00:28:15.184 START TEST nvmf_auth_host 00:28:15.184 ************************************ 00:28:15.184 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:15.444 * Looking for test storage... 00:28:15.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:15.444 05:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:15.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.444 --rc genhtml_branch_coverage=1 00:28:15.444 --rc genhtml_function_coverage=1 00:28:15.444 --rc genhtml_legend=1 00:28:15.444 --rc 
geninfo_all_blocks=1 00:28:15.444 --rc geninfo_unexecuted_blocks=1 00:28:15.444 00:28:15.444 ' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:15.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.444 --rc genhtml_branch_coverage=1 00:28:15.444 --rc genhtml_function_coverage=1 00:28:15.444 --rc genhtml_legend=1 00:28:15.444 --rc geninfo_all_blocks=1 00:28:15.444 --rc geninfo_unexecuted_blocks=1 00:28:15.444 00:28:15.444 ' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:15.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.444 --rc genhtml_branch_coverage=1 00:28:15.444 --rc genhtml_function_coverage=1 00:28:15.444 --rc genhtml_legend=1 00:28:15.444 --rc geninfo_all_blocks=1 00:28:15.444 --rc geninfo_unexecuted_blocks=1 00:28:15.444 00:28:15.444 ' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:15.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.444 --rc genhtml_branch_coverage=1 00:28:15.444 --rc genhtml_function_coverage=1 00:28:15.444 --rc genhtml_legend=1 00:28:15.444 --rc geninfo_all_blocks=1 00:28:15.444 --rc geninfo_unexecuted_blocks=1 00:28:15.444 00:28:15.444 ' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:15.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:15.444 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.445 05:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@320 -- # local -ga e810 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:22.015 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:22.015 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.015 
05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:22.015 Found net devices under 0000:af:00.0: cvl_0_0 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.015 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:22.016 Found net devices under 0000:af:00.1: cvl_0_1 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.016 05:53:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.016 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.275 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.275 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.275 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.275 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 
-- # ping -c 1 10.0.0.2 00:28:22.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:28:22.275 00:28:22.275 --- 10.0.0.2 ping statistics --- 00:28:22.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.275 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:22.275 05:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:22.275 00:28:22.275 --- 10.0.0.1 ping statistics --- 00:28:22.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.275 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
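The `nvmf_tcp_init` records above split one NIC pair into a target side and an initiator side: `cvl_0_0` is moved into a private network namespace for the target, `cvl_0_1` stays in the root namespace for the initiator, each gets a 10.0.0.0/24 address, the NVMe/TCP port 4420 is opened in iptables, and connectivity is verified with `ping` in both directions. A condensed dry-run sketch of that sequence (not the harness itself; `RUN=echo` prints the commands because the real ones need root, and interface/namespace names simply mirror the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator netns split seen in the log.
# RUN=echo by default: the commands are printed, not executed.
RUN=${RUN:-echo}

setup_split() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    # flush stale addresses, then move the target port into its own netns
    $RUN ip -4 addr flush "$tgt_if"
    $RUN ip -4 addr flush "$ini_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"
    # initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port on the initiator-side interface
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_split cvl_0_0 cvl_0_1
```

With `RUN=` (empty) and root privileges this performs the same steps the harness logs; the bidirectional `ping` checks then confirm the split works before `nvmf_tgt` is started inside the namespace.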
00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=276330 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 276330 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 276330 ']' 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.275 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.533 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3665c32798a50b0fa8b04b0ec4a63cf5 00:28:22.534 05:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3Rb 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3665c32798a50b0fa8b04b0ec4a63cf5 0 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3665c32798a50b0fa8b04b0ec4a63cf5 0 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3665c32798a50b0fa8b04b0ec4a63cf5 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3Rb 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3Rb 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3Rb 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:22.534 05:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b99a72032cfb109ba3d50aec4d0a580b72267e7504f30dc1bb908f2cd0284895 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DfJ 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b99a72032cfb109ba3d50aec4d0a580b72267e7504f30dc1bb908f2cd0284895 3 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b99a72032cfb109ba3d50aec4d0a580b72267e7504f30dc1bb908f2cd0284895 3 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b99a72032cfb109ba3d50aec4d0a580b72267e7504f30dc1bb908f2cd0284895 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DfJ 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DfJ 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DfJ 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=92d25c3d58b6f79bcc18e2f774cf5b6206ca4994b3eb2992 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4lB 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 92d25c3d58b6f79bcc18e2f774cf5b6206ca4994b3eb2992 0 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 92d25c3d58b6f79bcc18e2f774cf5b6206ca4994b3eb2992 0 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=92d25c3d58b6f79bcc18e2f774cf5b6206ca4994b3eb2992 00:28:22.534 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4lB 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4lB 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4lB 
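Each `gen_dhchap_key <digest> <len>` cycle above draws `len/2` random bytes (`xxd -p -c0 -l N /dev/urandom`), writes the formatted secret to a `mktemp` file, and records it in `keys[]`/`ckeys[]`. The `format_key DHHC-1 <hex> <digest>` step (the inline `python -` in the log) produces the NVMe DH-HMAC-CHAP secret representation: `DHHC-1:<hh>:<base64>:`, where `<hh>` is the hash id (0=null, 1=sha256, 2=sha384, 3=sha512) and the base64 payload is the raw key followed by its little-endian CRC-32. A self-contained sketch of that cycle, assuming this standard DHHC-1 layout (it is not copied from `nvmf/common.sh`; `od` stands in for `xxd` for portability):

```shell
#!/usr/bin/env bash
# Sketch of gen_dhchap_key/format_key: random key -> DHHC-1 secret string.
gen_dhchap_key() {
    local digest_id=$1 len=$2 key
    # len hex chars = len/2 random bytes (the log uses xxd -p -c0)
    key=$(od -An -vtx1 -N "$((len / 2))" /dev/urandom | tr -d ' \n')
    python3 - "$digest_id" "$key" <<'EOF'
import base64, sys, zlib

digest_id, hexkey = int(sys.argv[1]), sys.argv[2]
raw = bytes.fromhex(hexkey)
# DHHC-1 payload: key bytes + little-endian CRC-32 of the key, base64'd
crc = zlib.crc32(raw).to_bytes(4, "little")
print(f"DHHC-1:{digest_id:02x}:{base64.b64encode(raw + crc).decode()}:")
EOF
}

gen_dhchap_key 0 32   # null digest, 16-byte key -> DHHC-1:00:...:
gen_dhchap_key 3 64   # sha512 digest, 32-byte key -> DHHC-1:03:...:
```

The harness then `chmod 0600`s the secret file, which is what `nvmf_tgt` expects for keyring files.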
00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb211ff3b4a5846dc8969c6390c6cc71bf4649578436eb8f 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WeS 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb211ff3b4a5846dc8969c6390c6cc71bf4649578436eb8f 2 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eb211ff3b4a5846dc8969c6390c6cc71bf4649578436eb8f 2 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb211ff3b4a5846dc8969c6390c6cc71bf4649578436eb8f 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.793 05:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WeS 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WeS 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WeS 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=57d7567a1b33ed626fd7819c6093920c 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pZF 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 57d7567a1b33ed626fd7819c6093920c 1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 57d7567a1b33ed626fd7819c6093920c 1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=57d7567a1b33ed626fd7819c6093920c 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pZF 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pZF 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pZF 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa477fa28565cfd3449c9d5abf6638d1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6De 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa477fa28565cfd3449c9d5abf6638d1 1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa477fa28565cfd3449c9d5abf6638d1 1 00:28:22.793 05:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa477fa28565cfd3449c9d5abf6638d1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6De 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6De 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6De 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b8d65f08d5b022e7a453aae6ae02f1655cdaa7a0c9bcc353 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LMz 00:28:22.793 05:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b8d65f08d5b022e7a453aae6ae02f1655cdaa7a0c9bcc353 2 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b8d65f08d5b022e7a453aae6ae02f1655cdaa7a0c9bcc353 2 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b8d65f08d5b022e7a453aae6ae02f1655cdaa7a0c9bcc353 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:22.793 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LMz 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LMz 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LMz 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # 
key=40b1dc7592bf234a09350752acd8c994 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NQM 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 40b1dc7592bf234a09350752acd8c994 0 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 40b1dc7592bf234a09350752acd8c994 0 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=40b1dc7592bf234a09350752acd8c994 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NQM 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NQM 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.NQM 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # len=64 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71407a409efcbc9c39b7b3804d2fc4f53d71d3c34eb74ed154e12c07f2342593 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5Lx 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71407a409efcbc9c39b7b3804d2fc4f53d71d3c34eb74ed154e12c07f2342593 3 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71407a409efcbc9c39b7b3804d2fc4f53d71d3c34eb74ed154e12c07f2342593 3 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71407a409efcbc9c39b7b3804d2fc4f53d71d3c34eb74ed154e12c07f2342593 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5Lx 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5Lx 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5Lx 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 276330 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@835 -- # '[' -z 276330 ']' 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.052 05:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Rb 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DfJ ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DfJ 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4lB 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WeS ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WeS 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pZF 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6De ]] 00:28:23.311 05:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6De 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LMz 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.NQM ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.NQM 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5Lx 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:23.311 05:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:26.589 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:28:26.589 Waiting for block devices as requested 00:28:26.589 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:26.589 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:26.847 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:26.847 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:26.847 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:26.847 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:27.105 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:27.105 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:27.105 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:27.362 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:27.362 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:27.362 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:27.362 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:27.618 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:27.618 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:27.618 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:27.875 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:28.441 No valid GPT data, bailing 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:28.441 05:53:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:28:28.441 No valid GPT data, bailing 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:28:28.441 05:53:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:28.441 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:28.699 00:28:28.699 Discovery Log Number of Records 2, Generation counter 2 00:28:28.699 =====Discovery Log Entry 0====== 00:28:28.699 trtype: tcp 00:28:28.699 adrfam: ipv4 00:28:28.699 subtype: current discovery subsystem 00:28:28.699 treq: not 
specified, sq flow control disable supported 00:28:28.699 portid: 1 00:28:28.699 trsvcid: 4420 00:28:28.699 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:28.699 traddr: 10.0.0.1 00:28:28.699 eflags: none 00:28:28.699 sectype: none 00:28:28.699 =====Discovery Log Entry 1====== 00:28:28.699 trtype: tcp 00:28:28.699 adrfam: ipv4 00:28:28.699 subtype: nvme subsystem 00:28:28.699 treq: not specified, sq flow control disable supported 00:28:28.699 portid: 1 00:28:28.699 trsvcid: 4420 00:28:28.699 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:28.699 traddr: 10.0.0.1 00:28:28.699 eflags: none 00:28:28.699 sectype: none 00:28:28.699 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:28.699 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:28.699 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:28.699 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:28.699 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.699 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:28.700 05:53:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.700 nvme0n1 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:28.700 05:53:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.700 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.959 nvme0n1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.959 05:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.218 nvme0n1 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.218 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.477 nvme0n1 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.477 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.735 nvme0n1 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.735 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.736 nvme0n1 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.736 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.994 05:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.994 05:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.994 nvme0n1 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.994 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.252 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.252 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.252 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:30.253 05:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.253 05:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.253 nvme0n1 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.253 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.253 
05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.511 05:53:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.511 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 nvme0n1 00:28:30.512 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.512 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.512 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.512 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.512 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.770 05:53:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.770 05:53:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.770 nvme0n1 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.770 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:31.028 05:53:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 nvme0n1 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 
05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 05:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.287 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.288 
05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.288 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.546 nvme0n1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.546 05:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.546 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.804 nvme0n1 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.804 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.805 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.062 nvme0n1 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.062 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.063 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.063 05:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.063 05:53:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.063 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.320 nvme0n1 00:28:32.320 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.577 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.578 05:53:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.578 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.835 nvme0n1 00:28:32.835 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.835 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.835 
05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.836 05:53:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 05:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 nvme0n1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.401 05:53:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.401 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.659 nvme0n1 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:33.659 
05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.659 05:53:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.659 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.224 nvme0n1 00:28:34.224 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.224 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.224 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.224 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.224 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.225 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.225 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.225 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.225 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.225 05:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.225 05:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.225 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.481 nvme0n1 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.481 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.741 05:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.741 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.080 nvme0n1 00:28:35.080 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.080 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.080 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.080 
05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.080 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.080 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.080 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.081 05:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 05:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 nvme0n1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 05:53:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:35.708 05:53:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.708 05:53:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.708 05:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.274 nvme0n1 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.274 05:53:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:36.274 05:53:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.274 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.275 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.841 nvme0n1 00:28:36.841 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.841 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.841 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.841 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.841 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.099 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe8192 3 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.100 05:53:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.100 05:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.668 nvme0n1 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.668 
05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.668 05:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.233 nvme0n1 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.233 05:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.233 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.492 nvme0n1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.492 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.751 nvme0n1 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.751 05:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.751 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.009 nvme0n1 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:39.009 05:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.009 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:39.010 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.268 nvme0n1 00:28:39.268 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.268 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.268 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.268 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.268 05:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.268 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.269 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.527 nvme0n1 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.527 05:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.527 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.528 05:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.528 nvme0n1 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.528 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.786 05:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.786 05:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.786 nvme0n1 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.786 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.045 05:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.045 05:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.045 nvme0n1 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.045 05:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.303 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.303 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.303 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.303 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.303 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.304 05:53:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]]
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.304 nvme0n1
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.304 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:40.562 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.563 nvme0n1
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.563 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.821 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.080 nvme0n1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.080 05:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.338 nvme0n1
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z:
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi:
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:41.338 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z:
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]]
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi:
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.339 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.597 nvme0n1
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:41.597 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]]
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.855 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.114 nvme0n1
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.114 05:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.373 nvme0n1
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]]
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.373 05:54:00
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.373 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.940 nvme0n1 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.940 05:54:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]]
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.940 05:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.198 nvme0n1
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z:
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi:
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z:
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi:
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.198 05:54:01
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.198 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.199 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:43.199 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.199 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.764 nvme0n1
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.764 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:43.765 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.765 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
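Editor's aside (not part of the captured log): the `key=`/`ckey=` values echoed into the target above are DH-HMAC-CHAP secrets in the `DHHC-1:<hh>:<base64>:` representation. A minimal Python sketch that unpacks one of the traced secrets, assuming the nvme-cli convention that the base64 payload is the raw key followed by a little-endian CRC-32, and that `<hh>` names the optional hash transform (00 = none, 01/02/03 = SHA-256/384/512); `parse_dhchap_secret` is a hypothetical helper name:

```python
import base64
import struct
import zlib

def parse_dhchap_secret(secret: str):
    # Assumed layout: DHHC-1:<hh>:<base64(key || crc32(key) as le32)>:
    prefix, hash_id, b64, trailer = secret.split(":")
    if prefix != "DHHC-1" or trailer != "":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(b64)
    # Last four bytes carry a little-endian CRC-32 of the key material.
    key, (crc,) = raw[:-4], struct.unpack("<I", raw[-4:])
    return hash_id, key, zlib.crc32(key) == crc

# The keyid=0 secret that appears repeatedly in the trace above:
hid, key, crc_ok = parse_dhchap_secret(
    "DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:")
```

For this particular secret the payload decodes to a 32-byte key plus the four CRC bytes.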
00:28:44.022 nvme0n1
00:28:44.022 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.280 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.280 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.280 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.280 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.280 05:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.280 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.281 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.537 nvme0n1
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.537 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo
DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]]
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.795 05:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.361 nvme0n1
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]]
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:45.361 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:45.362 05:54:03
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.362 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.928 nvme0n1 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.928 05:54:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.928 05:54:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.928 05:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.494 nvme0n1 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.494 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.752 05:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.752 05:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:47.319 nvme0n1 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.319 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.320 
05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.320 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.886 nvme0n1 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.886 05:54:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.886 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.145 nvme0n1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:48.145 05:54:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.145 05:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.403 nvme0n1 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.403 
05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.403 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.404 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.662 nvme0n1 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.662 05:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:48.662 nvme0n1 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.662 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.921 
05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.921 nvme0n1 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.921 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.180 05:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.180 05:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.180 nvme0n1 00:28:49.180 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.180 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.180 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.180 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.180 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.180 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.181 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.181 05:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.181 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.181 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.439 05:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.439 nvme0n1 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.439 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 05:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.698 05:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 nvme0n1 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.698 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.956 05:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:49.956 nvme0n1 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.956 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.215 
05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.215 05:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.215 nvme0n1 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.215 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:50.216 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.474 05:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.474 nvme0n1 00:28:50.474 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.735 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.735 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.735 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.735 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.736 05:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.736 05:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.736 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.994 nvme0n1 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.994 05:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.994 05:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:50.994 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.995 05:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.253 nvme0n1 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.253 05:54:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.253 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:51.512 nvme0n1 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.512 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.770 
05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.770 nvme0n1 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.770 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw: 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]] 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:52.028 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.029 05:54:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.029 05:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.287 nvme0n1 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.287 05:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.287 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.546 05:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.546 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.804 nvme0n1 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.804 05:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.804 05:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.804 05:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.370 nvme0n1 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.370 05:54:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==: 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.370 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:53.627 nvme0n1 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.627 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=: 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.885 
05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.885 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.143 nvme0n1
00:28:54.143 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.143 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:54.143 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:54.143 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.143 05:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NWMzMjc5OGE1MGIwZmE4YjA0YjBlYzRhNjNjZjUOFmRw:
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=: ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjk5YTcyMDMyY2ZiMTA5YmEzZDUwYWVjNGQwYTU4MGI3MjI2N2U3NTA0ZjMwZGMxYmI5MDhmMmNkMDI4NDg5NYQ11wU=:
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.143 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.708 nvme0n1
00:28:54.709 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.709 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:54.709 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:54.709 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.709 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.709 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.967 05:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.532 nvme0n1
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z:
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi:
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z:
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]]
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi:
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:55.532 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.533 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.098 nvme0n1
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.098 05:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhkNjVmMDhkNWIwMjJlN2E0NTNhYWU2YWUwMmYxNjU1Y2RhYTdhMGM5YmNjMzUzGdpZUw==:
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU: ]]
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBiMWRjNzU5MmJmMjM0YTA5MzUwNzUyYWNkOGM5OTRSQfzU:
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.098 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.663 nvme0n1
00:28:56.663 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.663 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:56.663 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:56.663 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.663 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.664 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE0MDdhNDA5ZWZjYmM5YzM5YjdiMzgwNGQyZmM0ZjUzZDcxZDNjMzRlYjc0ZWQxNTRlMTJjMDdmMjM0MjU5M0Q7S6M=:
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:56.921 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.922 05:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.487 nvme0n1
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==:
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==:
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.487 request:
00:28:57.487 {
00:28:57.487 "name": "nvme0",
00:28:57.487 "trtype": "tcp",
00:28:57.487 "traddr": "10.0.0.1",
00:28:57.487 "adrfam": "ipv4",
00:28:57.487 "trsvcid": "4420",
00:28:57.487 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:57.487 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:57.487 "prchk_reftag": false,
00:28:57.487 "prchk_guard": false,
00:28:57.487 "hdgst": false,
00:28:57.487 "ddgst": false,
00:28:57.487 "allow_unrecognized_csi": false,
00:28:57.487 "method": "bdev_nvme_attach_controller",
00:28:57.487 "req_id": 1
00:28:57.487 }
00:28:57.487 Got JSON-RPC error response
00:28:57.487 response:
00:28:57.487 {
00:28:57.487 "code": -5,
00:28:57.487 "message": "Input/output error"
00:28:57.487 }
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.487 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.745 request:
00:28:57.745 {
00:28:57.745 "name": "nvme0",
00:28:57.745 "trtype": "tcp",
00:28:57.745 "traddr": "10.0.0.1",
00:28:57.745 "adrfam": "ipv4",
00:28:57.745 "trsvcid": "4420",
00:28:57.745 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:57.745 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:57.745 "prchk_reftag": false,
00:28:57.745 "prchk_guard": false,
00:28:57.745 "hdgst": false,
00:28:57.745 "ddgst": false,
00:28:57.745 "dhchap_key": "key2",
00:28:57.745 "allow_unrecognized_csi": false,
00:28:57.745 "method": "bdev_nvme_attach_controller",
00:28:57.745 "req_id": 1
00:28:57.745 }
00:28:57.745 Got JSON-RPC error response
00:28:57.745 response:
00:28:57.745 {
00:28:57.745 "code": -5,
00:28:57.745 "message": "Input/output error"
00:28:57.745 }
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:57.745 05:54:15
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.745 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.746 05:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.746 request: 00:28:57.746 { 00:28:57.746 "name": "nvme0", 00:28:57.746 "trtype": "tcp", 00:28:57.746 "traddr": "10.0.0.1", 00:28:57.746 "adrfam": "ipv4", 00:28:57.746 "trsvcid": "4420", 00:28:57.746 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:57.746 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:57.746 "prchk_reftag": false, 00:28:57.746 "prchk_guard": false, 00:28:57.746 "hdgst": false, 00:28:57.746 "ddgst": false, 00:28:57.746 "dhchap_key": "key1", 00:28:57.746 "dhchap_ctrlr_key": "ckey2", 00:28:57.746 "allow_unrecognized_csi": false, 00:28:57.746 "method": "bdev_nvme_attach_controller", 00:28:57.746 "req_id": 1 00:28:57.746 } 00:28:57.746 Got JSON-RPC error response 00:28:57.746 response: 00:28:57.746 { 00:28:57.746 "code": -5, 00:28:57.746 "message": "Input/output error" 00:28:57.746 } 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.746 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.004 nvme0n1 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.004 05:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:58.004 05:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.004 request: 00:28:58.004 { 00:28:58.004 "name": "nvme0", 00:28:58.004 "dhchap_key": "key1", 00:28:58.004 "dhchap_ctrlr_key": "ckey2", 00:28:58.004 "method": "bdev_nvme_set_keys", 00:28:58.004 "req_id": 1 00:28:58.004 } 00:28:58.004 Got JSON-RPC error response 00:28:58.004 response: 00:28:58.004 { 00:28:58.004 "code": -13, 00:28:58.004 "message": "Permission denied" 00:28:58.004 } 00:28:58.004 
05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.004 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.262 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.262 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:58.262 05:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:59.195 05:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.195 05:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:59.195 05:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.195 05:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.195 05:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.195 05:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:59.195 05:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJkMjVjM2Q1OGI2Zjc5YmNjMThlMmY3NzRjZjViNjIwNmNhNDk5NGIzZWIyOTkypJQTUA==: 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: ]] 00:29:00.133 05:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIyMTFmZjNiNGE1ODQ2ZGM4OTY5YzYzOTBjNmNjNzFiZjQ2NDk1Nzg0MzZlYjhmqICtBQ==: 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.133 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.394 nvme0n1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.394 05:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkNzU2N2ExYjMzZWQ2MjZmZDc4MTljNjA5MzkyMGOXLW8Z: 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: ]] 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0NzdmYTI4NTY1Y2ZkMzQ0OWM5ZDVhYmY2NjM4ZDHE9xVi: 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.394 
05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.394 request: 00:29:00.394 { 00:29:00.394 "name": "nvme0", 00:29:00.394 "dhchap_key": "key2", 00:29:00.394 "dhchap_ctrlr_key": "ckey1", 00:29:00.394 "method": "bdev_nvme_set_keys", 00:29:00.394 "req_id": 1 00:29:00.394 } 00:29:00.394 Got JSON-RPC error response 00:29:00.394 response: 00:29:00.394 { 00:29:00.394 "code": -13, 00:29:00.394 "message": "Permission denied" 00:29:00.394 } 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.394 05:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.394 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.653 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:00.653 05:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.587 rmmod nvme_tcp 00:29:01.587 rmmod nvme_fabrics 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 276330 ']' 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 276330 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 276330 ']' 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 276330 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276330 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276330' 00:29:01.587 killing process with pid 276330 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 276330 00:29:01.587 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 276330 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.846 05:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:04.382 05:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:06.917 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:29:07.181 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:07.181 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:07.181 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:07.181 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:07.440 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:08.377 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:08.377 05:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3Rb /tmp/spdk.key-null.4lB /tmp/spdk.key-sha256.pZF /tmp/spdk.key-sha384.LMz 
/tmp/spdk.key-sha512.5Lx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:08.377 05:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:11.668 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:29:11.668 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:11.668 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:11.668 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:11.668 00:29:11.668 real 0m56.439s 00:29:11.668 user 0m50.333s 00:29:11.668 sys 0m14.182s 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.668 ************************************ 00:29:11.668 END TEST nvmf_auth_host 00:29:11.668 
************************************ 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.668 05:54:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.928 ************************************ 00:29:11.928 START TEST nvmf_digest 00:29:11.928 ************************************ 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:11.928 * Looking for test storage... 00:29:11.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.928 05:54:29 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.928 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:11.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.928 --rc genhtml_branch_coverage=1 00:29:11.928 --rc genhtml_function_coverage=1 00:29:11.928 --rc genhtml_legend=1 00:29:11.928 --rc geninfo_all_blocks=1 00:29:11.928 --rc geninfo_unexecuted_blocks=1 00:29:11.928 00:29:11.928 ' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:11.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.929 --rc genhtml_branch_coverage=1 00:29:11.929 --rc genhtml_function_coverage=1 00:29:11.929 --rc genhtml_legend=1 00:29:11.929 --rc geninfo_all_blocks=1 00:29:11.929 --rc geninfo_unexecuted_blocks=1 00:29:11.929 00:29:11.929 ' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:11.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.929 --rc genhtml_branch_coverage=1 00:29:11.929 --rc genhtml_function_coverage=1 00:29:11.929 --rc genhtml_legend=1 00:29:11.929 --rc geninfo_all_blocks=1 00:29:11.929 --rc geninfo_unexecuted_blocks=1 00:29:11.929 00:29:11.929 ' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:11.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.929 --rc genhtml_branch_coverage=1 00:29:11.929 --rc genhtml_function_coverage=1 00:29:11.929 --rc genhtml_legend=1 00:29:11.929 --rc geninfo_all_blocks=1 00:29:11.929 --rc 
geninfo_unexecuted_blocks=1 00:29:11.929 00:29:11.929 ' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.929 05:54:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.929 05:54:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.501 
05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:18.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:18.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:18.501 Found net devices under 0000:af:00.0: cvl_0_0 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:18.501 Found net devices under 0000:af:00.1: cvl_0_1 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.501 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.502 05:54:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.502 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:29:18.761 00:29:18.761 --- 10.0.0.2 ping statistics --- 00:29:18.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.761 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:18.761 00:29:18.761 --- 10.0.0.1 ping statistics --- 00:29:18.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.761 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:18.761 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:18.762 ************************************ 00:29:18.762 START TEST nvmf_digest_clean 00:29:18.762 ************************************ 00:29:18.762 
05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=291682 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 291682 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 291682 ']' 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.762 05:54:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.762 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:18.762 [2024-12-10 05:54:36.688926] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:18.762 [2024-12-10 05:54:36.688973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.020 [2024-12-10 05:54:36.775254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.020 [2024-12-10 05:54:36.814289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.020 [2024-12-10 05:54:36.814322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.020 [2024-12-10 05:54:36.814330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.020 [2024-12-10 05:54:36.814336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.020 [2024-12-10 05:54:36.814341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:19.020 [2024-12-10 05:54:36.814858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.020 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.020 null0 00:29:19.020 [2024-12-10 05:54:36.962720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.278 [2024-12-10 05:54:36.986903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=291809 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 291809 /var/tmp/bperf.sock 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 291809 ']' 00:29:19.278 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.279 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.279 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:19.279 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.279 05:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.279 [2024-12-10 05:54:37.037414] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:19.279 [2024-12-10 05:54:37.037454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291809 ] 00:29:19.279 [2024-12-10 05:54:37.113977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.279 [2024-12-10 05:54:37.153536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.279 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.279 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:19.279 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:19.279 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:19.279 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.579 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.579 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.905 nvme0n1 00:29:19.905 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:19.905 05:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.164 Running I/O for 2 seconds... 00:29:22.031 25007.00 IOPS, 97.68 MiB/s [2024-12-10T04:54:39.990Z] 25406.00 IOPS, 99.24 MiB/s 00:29:22.031 Latency(us) 00:29:22.031 [2024-12-10T04:54:39.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.031 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:22.031 nvme0n1 : 2.05 24908.46 97.30 0.00 0.00 5031.11 2527.82 45188.63 00:29:22.031 [2024-12-10T04:54:39.990Z] =================================================================================================================== 00:29:22.031 [2024-12-10T04:54:39.990Z] Total : 24908.46 97.30 0.00 0.00 5031.11 2527.82 45188.63 00:29:22.031 { 00:29:22.031 "results": [ 00:29:22.031 { 00:29:22.031 "job": "nvme0n1", 00:29:22.031 "core_mask": "0x2", 00:29:22.031 "workload": "randread", 00:29:22.031 "status": "finished", 00:29:22.031 "queue_depth": 128, 00:29:22.031 "io_size": 4096, 00:29:22.031 "runtime": 2.045088, 00:29:22.031 "iops": 24908.463596676524, 00:29:22.031 "mibps": 97.29868592451767, 00:29:22.031 "io_failed": 0, 00:29:22.031 "io_timeout": 0, 00:29:22.031 "avg_latency_us": 5031.114494961394, 00:29:22.031 "min_latency_us": 2527.8171428571427, 00:29:22.031 "max_latency_us": 45188.63238095238 00:29:22.031 } 00:29:22.031 ], 00:29:22.031 "core_count": 1 00:29:22.031 } 00:29:22.031 05:54:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:22.031 05:54:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:22.031 05:54:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:22.031 05:54:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:22.031 | select(.opcode=="crc32c") 00:29:22.031 | "\(.module_name) \(.executed)"' 00:29:22.031 05:54:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 291809 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 291809 ']' 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 291809 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291809 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291809' 00:29:22.289 killing process with pid 291809 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 291809 00:29:22.289 Received shutdown signal, test time was about 2.000000 seconds 00:29:22.289 00:29:22.289 Latency(us) 00:29:22.289 [2024-12-10T04:54:40.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.289 [2024-12-10T04:54:40.248Z] =================================================================================================================== 00:29:22.289 [2024-12-10T04:54:40.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.289 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 291809 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=292284 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 292284 /var/tmp/bperf.sock 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 292284 ']' 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:22.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.547 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:22.547 [2024-12-10 05:54:40.425300] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:22.547 [2024-12-10 05:54:40.425348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292284 ] 00:29:22.547 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.547 Zero copy mechanism will not be used. 
00:29:22.805 [2024-12-10 05:54:40.502497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.805 [2024-12-10 05:54:40.541573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.805 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.805 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:22.805 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:22.805 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:22.805 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:23.063 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.063 05:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.321 nvme0n1 00:29:23.321 05:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:23.321 05:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.321 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.321 Zero copy mechanism will not be used. 00:29:23.321 Running I/O for 2 seconds... 
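The MiB/s column in the Latency tables above and below is the IOPS figure scaled by the configured IO size. A standalone sanity check of that relation (the helper name is ours, not part of the test suite; the input values are copied from the tables in this log):

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * io_size_bytes / (1024 * 1024)

# 4096-byte randread run above: 24908.46 IOPS is ~97.30 MiB/s
print(round(iops_to_mibps(24908.46, 4096), 2))
# 131072-byte randread run below: 6250.52 IOPS is ~781.31 MiB/s
print(round(iops_to_mibps(6250.52, 131072), 2))
```

This matches the "mibps" fields emitted in the JSON results blocks.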
00:29:25.628 6309.00 IOPS, 788.62 MiB/s [2024-12-10T04:54:43.587Z] 6247.50 IOPS, 780.94 MiB/s 00:29:25.628 Latency(us) 00:29:25.628 [2024-12-10T04:54:43.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.628 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:25.628 nvme0n1 : 2.00 6250.52 781.31 0.00 0.00 2557.20 631.95 8613.30 00:29:25.628 [2024-12-10T04:54:43.587Z] =================================================================================================================== 00:29:25.628 [2024-12-10T04:54:43.587Z] Total : 6250.52 781.31 0.00 0.00 2557.20 631.95 8613.30 00:29:25.628 { 00:29:25.628 "results": [ 00:29:25.628 { 00:29:25.628 "job": "nvme0n1", 00:29:25.628 "core_mask": "0x2", 00:29:25.628 "workload": "randread", 00:29:25.628 "status": "finished", 00:29:25.628 "queue_depth": 16, 00:29:25.628 "io_size": 131072, 00:29:25.628 "runtime": 2.001595, 00:29:25.628 "iops": 6250.515214116742, 00:29:25.628 "mibps": 781.3144017645927, 00:29:25.628 "io_failed": 0, 00:29:25.628 "io_timeout": 0, 00:29:25.628 "avg_latency_us": 2557.1971356254116, 00:29:25.628 "min_latency_us": 631.9542857142857, 00:29:25.628 "max_latency_us": 8613.302857142857 00:29:25.628 } 00:29:25.628 ], 00:29:25.628 "core_count": 1 00:29:25.628 } 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:25.628 | select(.opcode=="crc32c") 00:29:25.628 | "\(.module_name) \(.executed)"' 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 292284 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 292284 ']' 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 292284 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292284 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292284' 00:29:25.628 killing process with pid 292284 00:29:25.628 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 292284 00:29:25.628 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.628 
00:29:25.628 Latency(us) 00:29:25.628 [2024-12-10T04:54:43.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.629 [2024-12-10T04:54:43.588Z] =================================================================================================================== 00:29:25.629 [2024-12-10T04:54:43.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.629 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 292284 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=292804 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 292804 /var/tmp/bperf.sock 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 292804 ']' 00:29:25.887 05:54:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:25.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.887 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:25.887 [2024-12-10 05:54:43.700652] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:25.887 [2024-12-10 05:54:43.700705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292804 ] 00:29:25.887 [2024-12-10 05:54:43.780011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.887 [2024-12-10 05:54:43.819382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.145 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.145 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:26.145 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:26.145 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:26.145 05:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:26.403 05:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.403 05:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.661 nvme0n1 00:29:26.661 05:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:26.661 05:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.661 Running I/O for 2 seconds... 
00:29:28.966 28626.00 IOPS, 111.82 MiB/s [2024-12-10T04:54:46.925Z] 28710.50 IOPS, 112.15 MiB/s 00:29:28.966 Latency(us) 00:29:28.966 [2024-12-10T04:54:46.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.966 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.966 nvme0n1 : 2.00 28727.11 112.22 0.00 0.00 4451.29 2215.74 9175.04 00:29:28.966 [2024-12-10T04:54:46.925Z] =================================================================================================================== 00:29:28.966 [2024-12-10T04:54:46.925Z] Total : 28727.11 112.22 0.00 0.00 4451.29 2215.74 9175.04 00:29:28.966 { 00:29:28.966 "results": [ 00:29:28.966 { 00:29:28.966 "job": "nvme0n1", 00:29:28.966 "core_mask": "0x2", 00:29:28.966 "workload": "randwrite", 00:29:28.966 "status": "finished", 00:29:28.966 "queue_depth": 128, 00:29:28.966 "io_size": 4096, 00:29:28.966 "runtime": 2.003299, 00:29:28.966 "iops": 28727.11462442701, 00:29:28.966 "mibps": 112.215291501668, 00:29:28.966 "io_failed": 0, 00:29:28.966 "io_timeout": 0, 00:29:28.966 "avg_latency_us": 4451.289326180836, 00:29:28.966 "min_latency_us": 2215.7409523809524, 00:29:28.966 "max_latency_us": 9175.04 00:29:28.966 } 00:29:28.966 ], 00:29:28.966 "core_count": 1 00:29:28.966 } 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:28.966 | select(.opcode=="crc32c") 00:29:28.966 | "\(.module_name) \(.executed)"' 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 292804 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 292804 ']' 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 292804 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292804 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292804' 00:29:28.966 killing process with pid 292804 00:29:28.966 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 292804 00:29:28.966 Received shutdown signal, test time was about 2.000000 seconds 00:29:28.966 
00:29:28.966 Latency(us) 00:29:28.966 [2024-12-10T04:54:46.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.966 [2024-12-10T04:54:46.925Z] =================================================================================================================== 00:29:28.966 [2024-12-10T04:54:46.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.967 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 292804 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=293431 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 293431 /var/tmp/bperf.sock 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 293431 ']' 00:29:29.225 05:54:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.225 05:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:29.225 [2024-12-10 05:54:47.042789] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:29.225 [2024-12-10 05:54:47.042839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293431 ] 00:29:29.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.225 Zero copy mechanism will not be used. 
00:29:29.225 [2024-12-10 05:54:47.122163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.225 [2024-12-10 05:54:47.160469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.483 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.483 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:29.483 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:29.483 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:29.483 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:29.741 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.741 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.999 nvme0n1 00:29:29.999 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:29.999 05:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.999 Zero copy mechanism will not be used. 00:29:29.999 Running I/O for 2 seconds... 
00:29:32.307 6415.00 IOPS, 801.88 MiB/s [2024-12-10T04:54:50.266Z] 6471.50 IOPS, 808.94 MiB/s 00:29:32.307 Latency(us) 00:29:32.307 [2024-12-10T04:54:50.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.307 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:32.307 nvme0n1 : 2.00 6469.16 808.65 0.00 0.00 2469.22 1919.27 12420.63 00:29:32.307 [2024-12-10T04:54:50.266Z] =================================================================================================================== 00:29:32.307 [2024-12-10T04:54:50.266Z] Total : 6469.16 808.65 0.00 0.00 2469.22 1919.27 12420.63 00:29:32.307 { 00:29:32.307 "results": [ 00:29:32.307 { 00:29:32.307 "job": "nvme0n1", 00:29:32.307 "core_mask": "0x2", 00:29:32.307 "workload": "randwrite", 00:29:32.307 "status": "finished", 00:29:32.307 "queue_depth": 16, 00:29:32.307 "io_size": 131072, 00:29:32.307 "runtime": 2.003505, 00:29:32.307 "iops": 6469.162792206658, 00:29:32.307 "mibps": 808.6453490258323, 00:29:32.307 "io_failed": 0, 00:29:32.307 "io_timeout": 0, 00:29:32.307 "avg_latency_us": 2469.2209916195475, 00:29:32.307 "min_latency_us": 1919.2685714285715, 00:29:32.307 "max_latency_us": 12420.63238095238 00:29:32.307 } 00:29:32.307 ], 00:29:32.307 "core_count": 1 00:29:32.307 } 00:29:32.307 05:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:32.307 05:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:32.307 05:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:32.307 05:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:32.307 | select(.opcode=="crc32c") 00:29:32.307 | "\(.module_name) \(.executed)"' 00:29:32.307 05:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 293431 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 293431 ']' 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 293431 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 293431 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 293431' 00:29:32.307 killing process with pid 293431 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 293431 00:29:32.307 Received shutdown signal, test time was about 2.000000 seconds 00:29:32.307 
00:29:32.307 Latency(us) 00:29:32.307 [2024-12-10T04:54:50.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.307 [2024-12-10T04:54:50.266Z] =================================================================================================================== 00:29:32.307 [2024-12-10T04:54:50.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.307 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 293431 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 291682 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 291682 ']' 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 291682 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291682 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291682' 00:29:32.566 killing process with pid 291682 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 291682 00:29:32.566 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 291682 00:29:32.824 00:29:32.824 real 0m13.923s 
00:29:32.824 user 0m26.691s 00:29:32.824 sys 0m4.645s 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:32.824 ************************************ 00:29:32.824 END TEST nvmf_digest_clean 00:29:32.824 ************************************ 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:32.824 ************************************ 00:29:32.824 START TEST nvmf_digest_error 00:29:32.824 ************************************ 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=293924 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 293924 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 293924 ']' 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.824 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.825 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.825 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.825 05:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.825 [2024-12-10 05:54:50.683266] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:32.825 [2024-12-10 05:54:50.683307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.825 [2024-12-10 05:54:50.768808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.083 [2024-12-10 05:54:50.807781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.083 [2024-12-10 05:54:50.807817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:33.083 [2024-12-10 05:54:50.807824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.083 [2024-12-10 05:54:50.807830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.083 [2024-12-10 05:54:50.807835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.083 [2024-12-10 05:54:50.808376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.648 [2024-12-10 05:54:51.550527] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.648 05:54:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.648 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.906 null0 00:29:33.906 [2024-12-10 05:54:51.647108] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.906 [2024-12-10 05:54:51.671299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=294163 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 294163 /var/tmp/bperf.sock 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 294163 ']' 
00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.906 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.906 [2024-12-10 05:54:51.724532] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:33.906 [2024-12-10 05:54:51.724571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294163 ] 00:29:33.906 [2024-12-10 05:54:51.802696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.906 [2024-12-10 05:54:51.841848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.163 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.163 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:34.163 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:34.163 05:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:34.420 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:34.420 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.420 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.420 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.420 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.420 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.678 nvme0n1 00:29:34.678 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:34.678 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.678 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.678 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.678 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:34.678 05:54:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:34.678 Running I/O for 2 seconds... 00:29:34.936 [2024-12-10 05:54:52.643053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.643082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.643092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.655454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.655479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.655488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.664775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.664799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.664808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.674169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.674189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15216 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.674197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.682552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.682572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.682580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.692242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.692262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.692274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.701868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.701888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.701897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.711008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.711028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.711036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.720360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.720380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.720389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.729135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.729155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.729163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.738977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.738996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.739004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.748236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 
05:54:52.748256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.748265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.757830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.757850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.757858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.936 [2024-12-10 05:54:52.766817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.936 [2024-12-10 05:54:52.766836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.936 [2024-12-10 05:54:52.766844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.775017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.775039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.785089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.785109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.785116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.794829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.794849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.794857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.803550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.803570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.803578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.813888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.813908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.813916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.822212] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.822236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.822244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.833263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.833283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.833291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.842842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.842862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.842870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.852313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.852333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.852341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.861529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.861548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.861557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.870021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.870041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.870049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.937 [2024-12-10 05:54:52.881576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:34.937 [2024-12-10 05:54:52.881596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.937 [2024-12-10 05:54:52.881604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.890353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.890376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.890385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.901703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.901725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.901734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.910078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.910099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.910107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.922278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.922298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.922306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.934367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.934387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.934396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.946118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.946138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.946150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.956141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.956159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.956167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.965275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.965294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.965302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.973772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.973792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:35.195 [2024-12-10 05:54:52.973800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.982882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.982901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.982909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:52.991785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:52.991804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:52.991812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:53.000763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:53.000782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:53.000790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:53.010053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:53.010072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:15975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:53.010080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:53.020931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:53.020951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:53.020962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:53.031264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:53.031291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.195 [2024-12-10 05:54:53.031300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.195 [2024-12-10 05:54:53.039752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.195 [2024-12-10 05:54:53.039772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.039780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.048107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.048127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.048135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.059099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.059118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.059126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.067371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.067391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.067399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.077676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.077696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.077705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.088761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 
00:29:35.196 [2024-12-10 05:54:53.088780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.088789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.096979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.096999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.097007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.109085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.109105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.109113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.117215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.117240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.117248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.128441] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.128468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.128476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.196 [2024-12-10 05:54:53.139351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.196 [2024-12-10 05:54:53.139371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.196 [2024-12-10 05:54:53.139379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.149043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.149067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.149076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.158858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.158879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.158888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.168068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.168089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.168098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.177300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.177320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.177328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.186328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.186348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.186356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.199546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.199566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.199578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.207336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.207356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.207364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.219027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.219048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.219056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.227104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.227123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.227131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.237420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.237440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.237448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.246846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.246864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.246872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.256103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.256122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.256131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.265905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.265924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.265932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.274700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.274719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:35.454 [2024-12-10 05:54:53.274727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.285414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.285438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.285457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.296324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.296343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.296351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.304488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.304508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.304516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.316707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.316727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:15534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.316735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.327387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.327406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.454 [2024-12-10 05:54:53.327414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.454 [2024-12-10 05:54:53.336111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.454 [2024-12-10 05:54:53.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.336139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.346386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.346405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.346414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.355075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.355095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.355103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.365171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.365190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.365198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.374341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.374361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.374369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.383562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.383583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.383591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.393093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.393113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.393121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.455 [2024-12-10 05:54:53.401518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.455 [2024-12-10 05:54:53.401537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.455 [2024-12-10 05:54:53.401544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.414558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.414581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.414590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.425935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.425955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.425964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.435572] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.435593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.435601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.444583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.444602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.444610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.456632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.456653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.456664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.466821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.466842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.466850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.475151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.475170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.475178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.485756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.485776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.485783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.497730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.497750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.497758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.509868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.509887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.509894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.521106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.521127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.521135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.529795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.529816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.529824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.541385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.541406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.541414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.550425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.550445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.550453] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.559102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.559123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.559133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.570860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.570882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.570890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.580500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.580521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.580529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.713 [2024-12-10 05:54:53.589419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:35.713 [2024-12-10 05:54:53.589439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15542 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:35.713 [2024-12-10 05:54:53.589447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:35.713 [2024-12-10 05:54:53.601736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890)
00:29:35.713 [2024-12-10 05:54:53.601757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.713 [2024-12-10 05:54:53.601766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:35.713 25706.00 IOPS, 100.41 MiB/s [2024-12-10T04:54:53.672Z]
[... repeated data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR message groups on tqpair=(0x8ad890), qid:1, varying cid and lba (2024-12-10 05:54:53.609832 through 05:54:54.350420), omitted ...]
00:29:36.490 [2024-12-10 05:54:54.359005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890)
00:29:36.490 [2024-12-10 05:54:54.359025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.490 [2024-12-10 05:54:54.359032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.369229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.369258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.378960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.378980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.378987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.387680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.387700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.387708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.397123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.397144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.397152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.407207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.407235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.407243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.415688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.415708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.415716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.424806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.424825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.424834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.490 [2024-12-10 05:54:54.435757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.490 [2024-12-10 05:54:54.435776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.490 [2024-12-10 05:54:54.435784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.444316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.444338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.444347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.456849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.456871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.456879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.467041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.467061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.467069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.476163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.476183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.476191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.485403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.485422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.485429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.495002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.495022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.495031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.504573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.504592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.504600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.513675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.513694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:15737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.513702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.525766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.525786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.525794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.534103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.534122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.534130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.545122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.545142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.545150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.556288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.556308] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.556316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.564644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.564662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.564670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.576454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.576474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.576485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.585182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.585202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.585209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.596876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8ad890) 00:29:36.748 [2024-12-10 05:54:54.596896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.748 [2024-12-10 05:54:54.596904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.748 [2024-12-10 05:54:54.607499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.749 [2024-12-10 05:54:54.607519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.749 [2024-12-10 05:54:54.607528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.749 [2024-12-10 05:54:54.619582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.749 [2024-12-10 05:54:54.619601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.749 [2024-12-10 05:54:54.619609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.749 [2024-12-10 05:54:54.627533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ad890) 00:29:36.749 [2024-12-10 05:54:54.627552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.749 [2024-12-10 05:54:54.627559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.749 25666.00 IOPS, 100.26 MiB/s 00:29:36.749 Latency(us) 00:29:36.749 
[2024-12-10T04:54:54.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.749 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:36.749 nvme0n1 : 2.00 25684.04 100.33 0.00 0.00 4978.83 2481.01 17975.59 00:29:36.749 [2024-12-10T04:54:54.708Z] =================================================================================================================== 00:29:36.749 [2024-12-10T04:54:54.708Z] Total : 25684.04 100.33 0.00 0.00 4978.83 2481.01 17975.59 00:29:36.749 { 00:29:36.749 "results": [ 00:29:36.749 { 00:29:36.749 "job": "nvme0n1", 00:29:36.749 "core_mask": "0x2", 00:29:36.749 "workload": "randread", 00:29:36.749 "status": "finished", 00:29:36.749 "queue_depth": 128, 00:29:36.749 "io_size": 4096, 00:29:36.749 "runtime": 2.003579, 00:29:36.749 "iops": 25684.038413259474, 00:29:36.749 "mibps": 100.32827505179482, 00:29:36.749 "io_failed": 0, 00:29:36.749 "io_timeout": 0, 00:29:36.749 "avg_latency_us": 4978.82888457054, 00:29:36.749 "min_latency_us": 2481.0057142857145, 00:29:36.749 "max_latency_us": 17975.588571428572 00:29:36.749 } 00:29:36.749 ], 00:29:36.749 "core_count": 1 00:29:36.749 } 00:29:36.749 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:36.749 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:36.749 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:36.749 | .driver_specific 00:29:36.749 | .nvme_error 00:29:36.749 | .status_code 00:29:36.749 | .command_transient_transport_error' 00:29:36.749 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
(( 201 > 0 )) 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 294163 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 294163 ']' 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 294163 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294163 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294163' 00:29:37.007 killing process with pid 294163 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 294163 00:29:37.007 Received shutdown signal, test time was about 2.000000 seconds 00:29:37.007 00:29:37.007 Latency(us) 00:29:37.007 [2024-12-10T04:54:54.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.007 [2024-12-10T04:54:54.966Z] =================================================================================================================== 00:29:37.007 [2024-12-10T04:54:54.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.007 05:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 294163 00:29:37.263 05:54:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=294682 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 294682 /var/tmp/bperf.sock 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:37.263 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 294682 ']' 00:29:37.264 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:37.264 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.264 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:37.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:37.264 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.264 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:37.264 [2024-12-10 05:54:55.108319] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:37.264 [2024-12-10 05:54:55.108367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294682 ] 00:29:37.264 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:37.264 Zero copy mechanism will not be used. 00:29:37.264 [2024-12-10 05:54:55.187115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.520 [2024-12-10 05:54:55.228281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.520 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.520 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:37.520 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:37.520 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:37.776 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:37.776 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.776 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:29:37.776 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.776 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.776 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.034 nvme0n1 00:29:38.034 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:38.034 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.034 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.034 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.034 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:38.034 05:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:38.034 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:38.034 Zero copy mechanism will not be used. 00:29:38.034 Running I/O for 2 seconds... 
00:29:38.034 [2024-12-10 05:54:55.900081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.900113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.900124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.905971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.905995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.906004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.913184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.913212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.913227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.920482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.920505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.920513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.926776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.926798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.926806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.932829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.932851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.932859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.939037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.939059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.939067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.944559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.944580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.944587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.949911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.949932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.949940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.955201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.955228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.955236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.960468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.960489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.960501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.965799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.965820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:38.034 [2024-12-10 05:54:55.965828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.971128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.971149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.971156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.976445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.976465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.976473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.034 [2024-12-10 05:54:55.981749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.034 [2024-12-10 05:54:55.981770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.034 [2024-12-10 05:54:55.981777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:55.987232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:55.987259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:55.987274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:55.992917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:55.992941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:55.992950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:55.998299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:55.998321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:55.998329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.003770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.003791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.003800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.009298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.009323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.009331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.014699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.014720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.014728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.020263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.020285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.020293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.025564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.025585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.025593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.030989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 
00:29:38.293 [2024-12-10 05:54:56.031009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.031017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.036778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.036799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.036806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.042041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.042062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.042069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.047188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.047208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.047216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.052614] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.052635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.052643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.058260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.058281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.058288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.293 [2024-12-10 05:54:56.063635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.293 [2024-12-10 05:54:56.063656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.293 [2024-12-10 05:54:56.063663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.068943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.068964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.068972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.074288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.074309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.079671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.079694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.079702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.084852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.084872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.084879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.090099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.090120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.090128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.095412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.095433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.095440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.100817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.100838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.100850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.106016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.106037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.106044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.111435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.111457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.111465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.116676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.116706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.121813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.121834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.121842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.127025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.127045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.127053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.132324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.132344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.132352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.137583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.137604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.137611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.142872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.142892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.142900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.148170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.148191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.148198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.153559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.153580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.153588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.158949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.158981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.158988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.164371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.164393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.164402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.169834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.169854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.169861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.175186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.175206] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.175213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.180561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.180582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.180590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.186108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.186129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.186137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.191437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.191457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.191469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.196755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.196776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.196783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.294 [2024-12-10 05:54:56.201928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.294 [2024-12-10 05:54:56.201947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.294 [2024-12-10 05:54:56.201955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.207127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.207148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.207156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.212363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.212385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.212393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.217556] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.217576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.217584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.222780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.222800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.222808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.228000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.228020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.228029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.233267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.233287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.233295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.238658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.238682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.238690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.295 [2024-12-10 05:54:56.244103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.295 [2024-12-10 05:54:56.244127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.295 [2024-12-10 05:54:56.244136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.249638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.249662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.249671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.255146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.255169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.255178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.260566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.260588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.260596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.265842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.265863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.265871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.271183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.271203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.271211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.276559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.276580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 
05:54:56.276588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.281984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.282005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.282013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.287451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.287474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.287482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.553 [2024-12-10 05:54:56.292952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.553 [2024-12-10 05:54:56.292973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.553 [2024-12-10 05:54:56.292982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.298390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.298411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.298419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.303910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.303930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.303938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.309257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.309278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.309286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.314673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.314695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.314703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.319989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.320010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.320018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.325148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.325169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.325176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.330325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.330345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.330356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.335420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.335440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.335448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.340972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.340992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.341000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.346488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.346508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.346516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.351819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.351839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.351847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.357280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.357306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.357313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.363191] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.363214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.363228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.368605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.368627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.368635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.373886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.373907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.373915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.379607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.379631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.379639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.384795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.384816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.384824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.389790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.389812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.389820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.395011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.395033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.395040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.400187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.400208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.400222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.405510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.405533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.405541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.411021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.411043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.411052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.416452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.416474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.416482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.421747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.421768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 
05:54:56.421776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.427208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.427236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.427244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.432660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.432681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.432689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.437938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.554 [2024-12-10 05:54:56.437959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.554 [2024-12-10 05:54:56.437966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.554 [2024-12-10 05:54:56.443228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.443249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.443256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.448509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.448530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.448537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.453827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.453847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.453855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.459227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.459248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.459256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.464675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.464697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.464704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.469909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.469929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.469940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.475348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.475369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.475378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.480591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.480612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.480619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.485937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.485957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.485965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.491287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.491308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.491316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.496626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.496647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.496655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.555 [2024-12-10 05:54:56.501958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.555 [2024-12-10 05:54:56.501979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.555 [2024-12-10 05:54:56.501987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.813 [2024-12-10 05:54:56.507341] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.813 [2024-12-10 05:54:56.507367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.813 [2024-12-10 05:54:56.507376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.813 [2024-12-10 05:54:56.512764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.813 [2024-12-10 05:54:56.512788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.813 [2024-12-10 05:54:56.512796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.813 [2024-12-10 05:54:56.518135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.813 [2024-12-10 05:54:56.518157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.813 [2024-12-10 05:54:56.518165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.813 [2024-12-10 05:54:56.523532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.813 [2024-12-10 05:54:56.523554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.813 [2024-12-10 05:54:56.523561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:38.813 [2024-12-10 05:54:56.528750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.813 [2024-12-10 05:54:56.528772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.528780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.533866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.533888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.533896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.540112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.540133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.540141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.546193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.546215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.546230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.553029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.553051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.560606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.560628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.560636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.567642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.567664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.567676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.575461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.575483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.575491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.582672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.582693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.582702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.589814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.589835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.589843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.595832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.595855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.595863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.601179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.601201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.601209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.608050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.608071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.608079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.615388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.615409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.615418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.622570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.622590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.622598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.629974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.629999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.630007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.638186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.638207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.638215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.644949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.644970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.644978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.652281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.652301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.652309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.659397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.659418] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.659425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.666972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.666994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.667002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.674167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.674188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.674196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.680897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.680918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.680926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.687893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.687914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.687921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.693760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.693782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.693790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.699512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.699532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.699539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.704763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.704783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.704791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.709910] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.814 [2024-12-10 05:54:56.709930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-12-10 05:54:56.709937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-12-10 05:54:56.715113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.715132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.715140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.720342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.720362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.720370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.725538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.725558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.725566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.730798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.730818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.730825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.735970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.735989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.736001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.741115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.741136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.741143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.746212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.746238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.746246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.751341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.751361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.751369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.756475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.756494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.756502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.815 [2024-12-10 05:54:56.761626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:38.815 [2024-12-10 05:54:56.761646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.815 [2024-12-10 05:54:56.761654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.766892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.766915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.766936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.772145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.772168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.772177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.777289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.777309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.777317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.782424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.782448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.782456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.787571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.787592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.787600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.792633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.792654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.792661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.797755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.797775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.797783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.802904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.802924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.802932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.808017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.808037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.808045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.813121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.813140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.813147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.818272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.818291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.818299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.823368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.823388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.823396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.828413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.828432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.828440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.833499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.833519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.833527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.838581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.838601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.838608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.843374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.843395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.843403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.848340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 
00:29:39.073 [2024-12-10 05:54:56.848360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.848368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.853303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.853323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.853330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.858352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.858373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.858380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.863290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.073 [2024-12-10 05:54:56.863310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.073 [2024-12-10 05:54:56.863318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.073 [2024-12-10 05:54:56.868316] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.868342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.868350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.873369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.873390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.873398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.879571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.879591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.879599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.884837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.884858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.884866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.889957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.889985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 5522.00 IOPS, 690.25 MiB/s [2024-12-10T04:54:57.033Z] [2024-12-10 05:54:56.896496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.896516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.896524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.901680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.901700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.901708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.906867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.906887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.906895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.912061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.912081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.912089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.917466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.917487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.917495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.923439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.923461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.923470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.929232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.929253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:39.074 [2024-12-10 05:54:56.929261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.934522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.934543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.934550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.939865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.939886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.939894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.945388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.945409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.945416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.950734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.950755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.950762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.956118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.956139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.956147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.961368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.961388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.961399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.966606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.966626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.966634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.971752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.971773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.971781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.977020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.977040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.977048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.982657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.982678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.982686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.988448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.988470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.988478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.993713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 
00:29:39.074 [2024-12-10 05:54:56.993734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.993741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:56.998982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:56.999002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:56.999010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:57.004285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:57.004305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.074 [2024-12-10 05:54:57.004313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.074 [2024-12-10 05:54:57.009515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.074 [2024-12-10 05:54:57.009540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.075 [2024-12-10 05:54:57.009547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.075 [2024-12-10 05:54:57.014800] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.075 [2024-12-10 05:54:57.014820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.075 [2024-12-10 05:54:57.014828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.075 [2024-12-10 05:54:57.020072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.075 [2024-12-10 05:54:57.020092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.075 [2024-12-10 05:54:57.020100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.025517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.025546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.025560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.030789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.030816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.030828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.035989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.036012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.036020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.041275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.041296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.041304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.046501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.046521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.046530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.051826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.051847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.051854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.057039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.057059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.057067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.062254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.062274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.062282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.067462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.067482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.072708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.333 [2024-12-10 05:54:57.072728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.333 [2024-12-10 05:54:57.072736] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.333 [2024-12-10 05:54:57.077890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.077910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.077919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.083090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.083110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.083118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.088309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.088329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.088337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.094350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.094371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.094379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.100487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.100508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.100521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.105981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.106002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.111305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.111326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.111334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.116891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.116912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.116920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.122508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.122530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.122538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.127777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.127807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.132969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.132990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.132998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.138615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.138638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.138645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.143758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.143780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.143788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.148953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.148974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.148982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.152354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.152373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.152381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.156658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.156687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.161916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.161936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.161944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.167108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.167129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.167136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.172390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.172411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.172421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.177600] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.177621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.177629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.182729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.182750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.182757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.187826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.187847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.187858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.192966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.192987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.192994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.198108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.198128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.198136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.203253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.203274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.203281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.208370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.208391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.208399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.213512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.334 [2024-12-10 05:54:57.213533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.334 [2024-12-10 05:54:57.213541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.334 [2024-12-10 05:54:57.218648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.218669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.218676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.223737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.223758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.223765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.228864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.228885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.228892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.233950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.233974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.233982] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.239093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.239113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.239121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.244249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.244269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.244277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.249401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.249421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.249428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.254549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.254569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.254577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.259691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.259711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.259721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.264799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.264819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.264827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.269910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.269930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.269938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.275087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.275107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.275115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.280259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.280280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.280288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.335 [2024-12-10 05:54:57.285439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.335 [2024-12-10 05:54:57.285462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.335 [2024-12-10 05:54:57.285471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.290629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.290651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.290660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.295828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 
05:54:57.295851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.295859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.300937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.300957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.300965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.306068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.306089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.306097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.311197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.311223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.311231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.316312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.316332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.316340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.321404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.321425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.321436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.326536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.326556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.326564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.331694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.331715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.331723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.336861] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.336880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.336888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.341977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.341997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.342005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.347064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.347083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.347091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.594 [2024-12-10 05:54:57.352134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.594 [2024-12-10 05:54:57.352154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.594 [2024-12-10 05:54:57.352162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.357302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.357323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.357331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.362432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.362453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.362460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.367511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.367545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.367553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.372626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.372646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.372654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.377721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.377742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.377750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.382822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.382842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.382851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.387973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.387993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.388001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.392823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.392844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.392852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.398695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.398716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.398724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.403989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.404010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.404018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.409130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.409151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.409162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.414264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.414285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.414292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.419414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.419435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.594 [2024-12-10 05:54:57.419443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.594 [2024-12-10 05:54:57.424685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.594 [2024-12-10 05:54:57.424708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.424717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.429943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.429964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.429972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.435091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.435112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.435119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.440280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.440300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.440308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.445442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.445463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.445471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.450534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.450554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.450564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.455607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.455631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.455639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.460699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.460719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.460727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.465847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.465866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.465874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.470937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.470957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.470964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.476119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.476139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.476147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.481058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.481078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.481086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.486258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.486278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.486286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.491395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.491416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.491423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.496562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.496583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.496590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.501737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.501758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.501766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.506898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.506919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.512101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.512122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.512129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.517264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.517284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.517292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.522389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.522409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.522417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.527472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.527492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.527500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.532646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.532667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.532674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.537816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.537835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.537843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.595 [2024-12-10 05:54:57.542978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.595 [2024-12-10 05:54:57.543002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.595 [2024-12-10 05:54:57.543017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.548247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.548270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.548278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.553476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.553499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.553507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.558614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.558635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.558644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.563781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.563802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.563810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.568944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.568965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.568973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.574119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.574140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.574148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.579267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.579287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.579295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.584356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.584376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.584384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.589472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.589497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.589505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.594562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.594583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.594590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.599666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.599686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.599694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.604811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.604831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.604838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.609937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.609957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.609965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.615105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.615125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.615133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.620249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.620270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.620279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.625430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.625450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.625458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.630549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.854 [2024-12-10 05:54:57.630570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.854 [2024-12-10 05:54:57.630577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.854 [2024-12-10 05:54:57.635365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.635387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.635395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.640517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.640539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.640546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.645648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.645668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.645676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.650673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.650694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.650703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.655683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.655704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.655711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.660674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.660695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.660703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.665635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.665656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.665663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.670593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.670614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.670621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.675620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.675642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.675654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.680703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.680723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.680731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.685722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.685742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.685750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.690860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.690881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.690888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.696009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.696029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.696037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.701176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.701197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.701205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.706323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.706344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.706351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.711508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.711529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.711537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.716678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.716698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.716705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.721818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.721838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.721845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.726961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.726981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.726988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.732084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.732104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.732113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.737259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.737278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.737286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.742376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.742396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.742404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.747500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.747521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.747528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.752491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.752511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.752519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.755248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.755268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.755276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.760317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.760336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.760347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.765473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0)
00:29:39.855 [2024-12-10 05:54:57.765493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.855 [2024-12-10 05:54:57.765501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.855 [2024-12-10 05:54:57.770495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x21148e0) 00:29:39.855 [2024-12-10 05:54:57.770515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.855 [2024-12-10 05:54:57.770522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.856 [2024-12-10 05:54:57.775634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.856 [2024-12-10 05:54:57.775655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.856 [2024-12-10 05:54:57.775663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.856 [2024-12-10 05:54:57.780794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.856 [2024-12-10 05:54:57.780815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.856 [2024-12-10 05:54:57.780822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.856 [2024-12-10 05:54:57.785937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.856 [2024-12-10 05:54:57.785958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.856 [2024-12-10 05:54:57.785965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.856 [2024-12-10 05:54:57.791121] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.856 [2024-12-10 05:54:57.791142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.856 [2024-12-10 05:54:57.791150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.856 [2024-12-10 05:54:57.797244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.856 [2024-12-10 05:54:57.797266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.856 [2024-12-10 05:54:57.797274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.856 [2024-12-10 05:54:57.804585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:39.856 [2024-12-10 05:54:57.804610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.856 [2024-12-10 05:54:57.804622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.113 [2024-12-10 05:54:57.812142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.113 [2024-12-10 05:54:57.812171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.113 [2024-12-10 05:54:57.812180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:40.113 [2024-12-10 05:54:57.819550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.819573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.819581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.827139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.827162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.827170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.834452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.834474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.834483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.842262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.842285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.842294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.849745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.849767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.849775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.857610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.857632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.857640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.865076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.865098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.865106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.872800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.872821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 
05:54:57.872829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.880270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.880292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.880300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.114 [2024-12-10 05:54:57.888041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.888063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.888071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.114 5666.50 IOPS, 708.31 MiB/s [2024-12-10T04:54:58.073Z] [2024-12-10 05:54:57.896786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21148e0) 00:29:40.114 [2024-12-10 05:54:57.896808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.114 [2024-12-10 05:54:57.896817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.114 00:29:40.114 Latency(us) 00:29:40.114 [2024-12-10T04:54:58.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.114 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 
00:29:40.114 nvme0n1 : 2.00 5665.90 708.24 0.00 0.00 2820.48 635.86 13668.94 00:29:40.114 [2024-12-10T04:54:58.073Z] =================================================================================================================== 00:29:40.114 [2024-12-10T04:54:58.073Z] Total : 5665.90 708.24 0.00 0.00 2820.48 635.86 13668.94 00:29:40.114 { 00:29:40.114 "results": [ 00:29:40.114 { 00:29:40.114 "job": "nvme0n1", 00:29:40.114 "core_mask": "0x2", 00:29:40.114 "workload": "randread", 00:29:40.114 "status": "finished", 00:29:40.114 "queue_depth": 16, 00:29:40.114 "io_size": 131072, 00:29:40.114 "runtime": 2.003034, 00:29:40.114 "iops": 5665.904822384443, 00:29:40.114 "mibps": 708.2381027980554, 00:29:40.114 "io_failed": 0, 00:29:40.114 "io_timeout": 0, 00:29:40.114 "avg_latency_us": 2820.4832680873915, 00:29:40.114 "min_latency_us": 635.8552380952381, 00:29:40.114 "max_latency_us": 13668.937142857143 00:29:40.114 } 00:29:40.114 ], 00:29:40.114 "core_count": 1 00:29:40.114 } 00:29:40.114 05:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:40.114 05:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:40.114 05:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:40.114 | .driver_specific 00:29:40.114 | .nvme_error 00:29:40.114 | .status_code 00:29:40.114 | .command_transient_transport_error' 00:29:40.114 05:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 367 > 0 )) 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 294682 00:29:40.371 05:54:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 294682 ']' 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 294682 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294682 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294682' 00:29:40.371 killing process with pid 294682 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 294682 00:29:40.371 Received shutdown signal, test time was about 2.000000 seconds 00:29:40.371 00:29:40.371 Latency(us) 00:29:40.371 [2024-12-10T04:54:58.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.371 [2024-12-10T04:54:58.330Z] =================================================================================================================== 00:29:40.371 [2024-12-10T04:54:58.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 294682 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:40.371 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=295313 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 295313 /var/tmp/bperf.sock 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 295313 ']' 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.629 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:40.629 [2024-12-10 05:54:58.370497] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:29:40.629 [2024-12-10 05:54:58.370543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295313 ] 00:29:40.629 [2024-12-10 05:54:58.450119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.629 [2024-12-10 05:54:58.488470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.886 05:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:41.451 nvme0n1 00:29:41.451 05:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:41.451 05:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.451 05:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:41.451 05:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.451 05:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:41.451 05:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:41.451 Running I/O for 2 seconds... 
00:29:41.451 [2024-12-10 05:54:59.311415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.311546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.311572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.320774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.320894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.320915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.330189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.330314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.330332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.339685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.339802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.339820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.349145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.349271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.349289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.358573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.358691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.358708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.367950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.368066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.368084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.377342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.377458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.377476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.386743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.386860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.386877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.451 [2024-12-10 05:54:59.396121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.451 [2024-12-10 05:54:59.396273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.451 [2024-12-10 05:54:59.396290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.405929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.406056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.406077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.415455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.415574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:41.709 [2024-12-10 05:54:59.415594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.424854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.424977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.424994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.434233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.434349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.434370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.443634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.443778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.443795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.452997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.453129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:2292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.453147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.462453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.462575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.462592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.471835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.471949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.471967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.481186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.481307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.481325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.490573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.490687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.490704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.499921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.500035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.500052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.509295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.509412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.509430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.518636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.709 [2024-12-10 05:54:59.518759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.518776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.709 [2024-12-10 05:54:59.527970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 
00:29:41.709 [2024-12-10 05:54:59.528086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.709 [2024-12-10 05:54:59.528103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.537372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.537487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.537504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.546723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.546837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.546855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.556046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.556160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.556177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.565395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.565529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.575262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.575380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.575398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.584863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.584981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.584999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.594783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.594901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.594919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.604370] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.604491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.604510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.614073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.614192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.614210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.623615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.623746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.623763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.633042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.633157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.633174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.642566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.642683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.642700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.651982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.652095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.652112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.710 [2024-12-10 05:54:59.661557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.710 [2024-12-10 05:54:59.661676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.710 [2024-12-10 05:54:59.661697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.671152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.671278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.671300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.680501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.680617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.680639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.689872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.689988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.690006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.699344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.699459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.699477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.708675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.708791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.708809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.718042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.718156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.718173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.727378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.727497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.727515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.736735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.736869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.736886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.746107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.746225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:41.969 [2024-12-10 05:54:59.746242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.755445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.755559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.755576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.764816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.764933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.764950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.774139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.774264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.774281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.783478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.783592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:11632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.783609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.792851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.792966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.792984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.802175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.802296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.811521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.811634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.811651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.820898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.821014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.969 [2024-12-10 05:54:59.821031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.969 [2024-12-10 05:54:59.830493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.969 [2024-12-10 05:54:59.830608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.830625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.839846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.839959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.839975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.849190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.849314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.849331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.858595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 
00:29:41.970 [2024-12-10 05:54:59.858707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.858724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.867963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.868097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.868114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.877347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.877460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.877477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.886685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.886798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.886815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.896045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.896162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.896179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.905378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.905492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.905509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:41.970 [2024-12-10 05:54:59.914728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:41.970 [2024-12-10 05:54:59.914842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.970 [2024-12-10 05:54:59.914859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.924425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.924547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.924570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.933905] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.934021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.934040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.943285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.943419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.943438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.952682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.952796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.952813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.962050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.962183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.962200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.971435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.971550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.971566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.980778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.980893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.980911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.990133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.990255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.990272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:54:59.999579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:54:59.999694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:54:59.999711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.009160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.009287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.009305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.019449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.019567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.019584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.029070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.029187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.029205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.039079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.039221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.039241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.048904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.049024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.049041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.058537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.058653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.058670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.068178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.068304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.068322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.077796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.077915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:42.228 [2024-12-10 05:55:00.077932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.228 [2024-12-10 05:55:00.087942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.228 [2024-12-10 05:55:00.088076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.228 [2024-12-10 05:55:00.088097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.097608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.097727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.097744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.107198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.107322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.107339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.116809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.116942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.116960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.126747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.126865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.126884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.136364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.136481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.136498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.146005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.146122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.146139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.155637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.155755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.155772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.165225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.165343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.165360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.229 [2024-12-10 05:55:00.174875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.229 [2024-12-10 05:55:00.174997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.229 [2024-12-10 05:55:00.175015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.184660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.184785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.184805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.194313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 
00:29:42.486 [2024-12-10 05:55:00.194434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.194452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.203915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.204035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.204055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.213517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.213633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.213650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.223120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.223260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.223278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.232827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.232946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.232963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.242442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.242562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.242580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.252058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.252175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.252193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.261616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.261734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.261751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.271226] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.271345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.271363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.280864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.280980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.280998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.290463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.290580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.290597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 26638.00 IOPS, 104.05 MiB/s [2024-12-10T04:55:00.445Z] [2024-12-10 05:55:00.300069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.300187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.300205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.309695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.309811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.309829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.319292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.319426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.328935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.329053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.329070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.338614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.338730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.338751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.486 [2024-12-10 05:55:00.348262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.486 [2024-12-10 05:55:00.348380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.486 [2024-12-10 05:55:00.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.357841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.357956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.357973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.367424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.367538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.367555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.377087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.377202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:42.487 [2024-12-10 05:55:00.377223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.386712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.386827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.386845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.396293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.396411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.396430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.405921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.406038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.406055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.415513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.415644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.415662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.425138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.425266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.425283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.487 [2024-12-10 05:55:00.434769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.487 [2024-12-10 05:55:00.434884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.487 [2024-12-10 05:55:00.434901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.744 [2024-12-10 05:55:00.444584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.444706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.444726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.454179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.454311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.454330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.463796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.463928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.463946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.473394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.473511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.473528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.482991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.483110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.483128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.492604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 
[2024-12-10 05:55:00.492720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.492737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.502206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.502346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.502364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.511861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.511991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.512008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.521457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.521573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.521590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.531051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.531167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.531184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.540663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.540782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.540799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.550290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.550406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.550423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.559936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.560053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.560071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.569747] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.569863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.569880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.579332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.579465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.579483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.588993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.589111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.589132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.598635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.598749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.598766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 
dnr:0 00:29:42.745 [2024-12-10 05:55:00.608214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.608332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.608349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.617849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.617962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.617979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.627439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.627552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.627569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.637075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.637191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.637208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.646732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.646846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.646863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.656415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.656531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.656548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.666048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.666167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.666187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.675639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.675773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.675791] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.745 [2024-12-10 05:55:00.685305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c6e30) with pdu=0x200016efda78 00:29:42.745 [2024-12-10 05:55:00.685425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.745 [2024-12-10 05:55:00.685442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[... repeated identical "tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries (qid:1, cid:22-28, timestamps 05:55:00.694 through 05:55:01.294) omitted ...]
00:29:43.521 26682.50 IOPS, 104.23 MiB/s 00:29:43.521 Latency(us) 00:29:43.521 [2024-12-10T04:55:01.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.521 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.521 nvme0n1 : 2.01 26688.38 104.25 0.00 0.00 4788.06 3386.03 14168.26 00:29:43.521 [2024-12-10T04:55:01.480Z] =================================================================================================================== 00:29:43.521 [2024-12-10T04:55:01.480Z] Total : 26688.38 104.25 0.00
0.00 4788.06 3386.03 14168.26 00:29:43.521 { 00:29:43.521 "results": [ 00:29:43.521 { 00:29:43.521 "job": "nvme0n1", 00:29:43.521 "core_mask": "0x2", 00:29:43.521 "workload": "randwrite", 00:29:43.521 "status": "finished", 00:29:43.521 "queue_depth": 128, 00:29:43.521 "io_size": 4096, 00:29:43.521 "runtime": 2.005854, 00:29:43.521 "iops": 26688.383102658518, 00:29:43.521 "mibps": 104.25149649475983, 00:29:43.521 "io_failed": 0, 00:29:43.521 "io_timeout": 0, 00:29:43.521 "avg_latency_us": 4788.055608245204, 00:29:43.521 "min_latency_us": 3386.0266666666666, 00:29:43.521 "max_latency_us": 14168.259047619047 00:29:43.521 } 00:29:43.521 ], 00:29:43.521 "core_count": 1 00:29:43.521 } 00:29:43.521 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:43.521 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:43.521 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:43.521 | .driver_specific 00:29:43.521 | .nvme_error 00:29:43.521 | .status_code 00:29:43.521 | .command_transient_transport_error' 00:29:43.521 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 295313 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 295313 ']' 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 295313 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- 
# uname 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295313 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295313' 00:29:43.779 killing process with pid 295313 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 295313 00:29:43.779 Received shutdown signal, test time was about 2.000000 seconds 00:29:43.779 00:29:43.779 Latency(us) 00:29:43.779 [2024-12-10T04:55:01.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.779 [2024-12-10T04:55:01.738Z] =================================================================================================================== 00:29:43.779 [2024-12-10T04:55:01.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 295313 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 
-- # qd=16 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=295784 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 295784 /var/tmp/bperf.sock 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 295784 ']' 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:43.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.779 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.037 [2024-12-10 05:55:01.772100] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:44.037 [2024-12-10 05:55:01.772146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295784 ] 00:29:44.037 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:44.037 Zero copy mechanism will not be used. 
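`waitforlisten 295784 /var/tmp/bperf.sock` above blocks until the freshly launched bdevperf accepts RPCs on its UNIX domain socket ("Waiting for process to start up and listen..."). The polling idea can be sketched as follows — a standalone illustration of the pattern, not the autotest helper itself; the function name and retry parameters are ours:

```python
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.05):
    """Poll until something accepts connections on a UNIX domain socket.

    Sketch of the 'waitforlisten' pattern seen in the log: retry connect()
    until it succeeds or the retry budget runs out.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True  # someone is listening
        except OSError:
            time.sleep(delay)  # not up yet; back off and retry
        finally:
            s.close()
    return False
```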
00:29:44.037 [2024-12-10 05:55:01.850304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.037 [2024-12-10 05:55:01.890288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.037 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.037 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:44.037 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:44.037 05:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:44.294 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:44.294 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.294 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.294 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.294 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.294 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.551 nvme0n1 00:29:44.551 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:44.551 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.551 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.551 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.551 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:44.551 05:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.551 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:44.551 Zero copy mechanism will not be used. 00:29:44.551 Running I/O for 2 seconds... 00:29:44.810 [2024-12-10 05:55:02.508598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.508708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.508736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.514134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.514202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.514230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.810 
[2024-12-10 05:55:02.519009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.519079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.519103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.523923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.523994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.524014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.528849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.528914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.528932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.533467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.533535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.533554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.538251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.538322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.538340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.543053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.543141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.543158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.547857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.547937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.547955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.552626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.552695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.552713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.557451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.557524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.557542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.562209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.562289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.562307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.567018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.567082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.810 [2024-12-10 05:55:02.567100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.810 [2024-12-10 05:55:02.571867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.810 [2024-12-10 05:55:02.571935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.571953] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.576898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.576978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.576996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.582007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.582066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.582085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.587670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.587724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.587742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.592918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.592982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.593000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.598677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.598732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.598750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.604042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.604100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.604118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.609022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.609143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.609162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.614048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.614113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.614131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.618858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.618913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.618931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.623388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.623466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.627992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.628043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.628060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.632432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.632530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.632548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.637009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.637080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.637098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.641508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.641564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.641582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.645950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.646023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.646044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.650568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:44.811 [2024-12-10 05:55:02.650670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.650688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.655153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.655208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.655231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.659571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.659627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.659645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.664769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.664941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.664959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.670890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.671044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.671062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.676989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.677109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.677128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.683417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.683580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.683598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.689793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.689956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.689974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.696129] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.696233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.696252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.701775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.701950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.701968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.708272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.708424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.708441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.811 [2024-12-10 05:55:02.714661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.811 [2024-12-10 05:55:02.714820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.811 [2024-12-10 05:55:02.714838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:44.812 [2024-12-10 05:55:02.721487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.721643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.721661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.812 [2024-12-10 05:55:02.728041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.728188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.728206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.812 [2024-12-10 05:55:02.734828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.734984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.735002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.812 [2024-12-10 05:55:02.741053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.741226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.741244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.812 [2024-12-10 05:55:02.747413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.747588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.747606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.812 [2024-12-10 05:55:02.753691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.753855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.753874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.812 [2024-12-10 05:55:02.760224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:44.812 [2024-12-10 05:55:02.760374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.812 [2024-12-10 05:55:02.760395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.767152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.767297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-10 05:55:02.767318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.773945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.774095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-10 05:55:02.774115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.780406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.780531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-10 05:55:02.780551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.786836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.786987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-10 05:55:02.787005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.793195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.793350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.070 [2024-12-10 05:55:02.793368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.799341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.799520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-10 05:55:02.799537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.070 [2024-12-10 05:55:02.805805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.070 [2024-12-10 05:55:02.805980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.806003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.812378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.812521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.812539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.819358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.819523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.819541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.825731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.825896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.825914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.832124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.832284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.832301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.838335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.838478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.838497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.844545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.844710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.844727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.851059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.851223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.851241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.857798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.857941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.857959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.864572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.864701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.864719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.871237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:45.071 [2024-12-10 05:55:02.871383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.871400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.877532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.877692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.877710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.883724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.883884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.883902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.889985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.890129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.890147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.896839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.897015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.897033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.903342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.903493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.903511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.909670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.909854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.909872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.916252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.916398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.916416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.922640] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.922792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.922810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.928917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.929067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.929084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.935265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.935419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.935437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.941740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.941916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.941934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:45.071 [2024-12-10 05:55:02.948686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.948845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.948863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.955598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.955775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.955792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.961849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.962010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.962027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.968244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.968378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.968396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.974732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.974889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.974911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.980899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.071 [2024-12-10 05:55:02.981085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-10 05:55:02.981103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.071 [2024-12-10 05:55:02.987246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.072 [2024-12-10 05:55:02.987403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-10 05:55:02.987421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.072 [2024-12-10 05:55:02.993956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.072 [2024-12-10 05:55:02.994107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-10 05:55:02.994125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.072 [2024-12-10 05:55:03.000895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.072 [2024-12-10 05:55:03.001046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-10 05:55:03.001063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.072 [2024-12-10 05:55:03.007189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.072 [2024-12-10 05:55:03.007354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-10 05:55:03.007372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.072 [2024-12-10 05:55:03.014075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.072 [2024-12-10 05:55:03.014241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-10 05:55:03.014259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.072 [2024-12-10 05:55:03.020781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.072 [2024-12-10 05:55:03.020935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.072 [2024-12-10 05:55:03.020957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.027171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.027323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.027344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.033928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.034093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.034113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.040626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.040805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.040824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.046959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.047125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.047143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.053230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.053379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.053397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.059725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.059900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.059918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.065958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.066112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.066130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.072266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.072423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.072440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.330 [2024-12-10 05:55:03.078916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.330 [2024-12-10 05:55:03.079075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.330 [2024-12-10 05:55:03.079094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.085315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.085468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.085486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.091644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.091803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.091820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.098308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:45.331 [2024-12-10 05:55:03.098478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.098496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.104703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.104857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.104875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.111080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.111212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.111237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.118483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.118542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.118561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.125269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.125406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.125424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.133274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.133408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.133426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.140703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.140854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.140873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.148979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.149109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.149131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.155926] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.156079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.156097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.162895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.163054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.163072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.169766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.169926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.169945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.177109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.177281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.177300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:45.331 [2024-12-10 05:55:03.184711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.184852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.184870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.191584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.191722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.191740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.197858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.197912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.197931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.203364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.203451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.203470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.208196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.208273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.208292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.212768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.212834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.212852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.217318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.217377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.217394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.221812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.221881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.221899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.226354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.226428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.226445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.230787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.230860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.230878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.235262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.235318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.235335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.239711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.239770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.331 [2024-12-10 05:55:03.239788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.244238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.331 [2024-12-10 05:55:03.244301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.331 [2024-12-10 05:55:03.244319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.331 [2024-12-10 05:55:03.248718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.248798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.248817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.253196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.253271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.253290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.257674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.257731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.257749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.262089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.262152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.262170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.266560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.266638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.266656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.271106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.271167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.271185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.275739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.275800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.275818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.332 [2024-12-10 05:55:03.280399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.332 [2024-12-10 05:55:03.280457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.332 [2024-12-10 05:55:03.280479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.590 [2024-12-10 05:55:03.285191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.590 [2024-12-10 05:55:03.285257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.590 [2024-12-10 05:55:03.285281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.590 [2024-12-10 05:55:03.290399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.290454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.290474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.295967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:45.591 [2024-12-10 05:55:03.296111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.296132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.301390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.301515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.301534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.306412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.306475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.306493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.311602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.311666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.311684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.316425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.316484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.316502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.321136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.321254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.321272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.326659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.326789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.326807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.332183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.332256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.332275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.337280] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.337409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.337427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.342178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.342251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.342269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.346945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.347015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.347033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.351597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.351693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.351711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:45.591 [2024-12-10 05:55:03.356409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.356526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.356544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.361173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.361291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.361309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.365888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.365947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.365965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.370654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.370707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.370724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.375397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.375502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.375519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.380171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.380233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.380251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.384885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.384953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.384972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.389537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.389607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.389626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.394425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.394510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.394528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.399029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.399088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.399106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.403670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.403735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.403753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.408392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.408495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.591 [2024-12-10 05:55:03.408512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.413371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.413529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.413550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.418532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.591 [2024-12-10 05:55:03.418623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.591 [2024-12-10 05:55:03.418641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.591 [2024-12-10 05:55:03.423944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.424141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.424160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.429604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.429693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.429711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.435083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.435160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.435179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.439780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.439896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.439914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.444509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.444581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.444599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.449104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.449174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.449193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.453785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.453892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.453910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.459415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.459495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.459513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.464293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.464366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.464383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.470750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:45.592 [2024-12-10 05:55:03.470905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.470923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.477251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.477341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.477359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.483944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.484073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.484090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.489545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.489601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.489618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.494946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.495087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.495105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.500113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.500186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.500204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.592 5395.00 IOPS, 674.38 MiB/s [2024-12-10T04:55:03.551Z] [2024-12-10 05:55:03.506064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.506134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.506152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.510855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.510909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.510926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:29:45.592 [2024-12-10 05:55:03.515619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.515678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.515696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.520206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.520277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.520294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.524991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.525051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.525069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.592 [2024-12-10 05:55:03.529846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:45.592 [2024-12-10 05:55:03.529901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.592 [2024-12-10 05:55:03.529920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.592 [2024-12-10 05:55:03.534629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.592 [2024-12-10 05:55:03.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.592 [2024-12-10 05:55:03.534717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.592 [2024-12-10 05:55:03.539501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.592 [2024-12-10 05:55:03.539584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.592 [2024-12-10 05:55:03.539608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.544253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.544327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.544349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.548991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.549110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.549135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.553718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.553777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.553796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.559054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.559110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.559128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.564374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.564444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.564463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.570035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.570174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.570192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.575183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.575315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.575333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.580408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.580465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.580483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.586319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.851 [2024-12-10 05:55:03.586396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.851 [2024-12-10 05:55:03.586414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.851 [2024-12-10 05:55:03.591573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.591628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.591646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.596828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.596960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.596979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.602506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.602576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.602594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.607732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.607797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.607815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.613226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.613287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.613305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.618603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.618715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.618733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.624337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.624469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.624488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.629850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.629915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.629933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.634621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.634698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.634717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.639442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.639511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.639531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.644064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.644173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.644192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.648591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.648684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.648703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.653156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.653209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.653234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.657773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.657836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.657853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.662264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.662333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.662352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.666829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.666894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.666912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.671334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.671393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.671412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.675890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.675945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.675963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.680376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.680461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.684830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.684889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.684907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.689313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.689377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.689395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.693708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.693764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.693782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.698163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.698226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.698244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.702622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.702675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.702693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.707049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.707108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.707125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.711546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.711598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.711615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.715980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.716050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.852 [2024-12-10 05:55:03.716068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.852 [2024-12-10 05:55:03.720388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.852 [2024-12-10 05:55:03.720456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.720473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.724798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.724856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.724874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.729237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.729292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.729309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.733669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.733721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.733738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.738090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.738143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.738161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.742516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.742586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.742604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.746998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.747051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.747069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.751486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.751552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.751569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.756317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.756491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.756511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.762054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.762226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.762245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.768103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.768300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.768319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.774591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.774755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.774775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.780393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.780470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.780490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.785125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.785192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.785210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.789791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.789856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.789874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.794405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.794465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.794483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.798949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:45.853 [2024-12-10 05:55:03.799015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.853 [2024-12-10 05:55:03.799033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.853 [2024-12-10 05:55:03.803636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.803702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.803731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.808240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.808301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.808321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.812864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.812922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.812942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.817492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.817559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.822046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.822125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.822145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.826528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.826589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.826608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.831047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.831111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.831129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.835616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.835679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.835697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.840543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.840607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.840626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.845364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.845443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.845462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.850548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.850847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.850867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.856403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.856715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.856735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.862825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.863193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.863213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.868539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.868824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.868845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.874432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.874761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.874781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.880700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.881084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.881105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.886999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.887347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.887367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.893106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.893479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.893499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.899648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.900015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.900035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.905898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.906267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.906287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.912515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.912765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.912784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.919102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.919412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.919432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.112 [2024-12-10 05:55:03.925944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.112 [2024-12-10 05:55:03.926319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.112 [2024-12-10 05:55:03.926339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:46.113 [2024-12-10 05:55:03.932827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.113 [2024-12-10 05:55:03.933086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.113 [2024-12-10 05:55:03.933106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:46.113 [2024-12-10 05:55:03.938086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.113 [2024-12-10 05:55:03.938352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.113 [2024-12-10 05:55:03.938372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:46.113 [2024-12-10 05:55:03.942893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.113 [2024-12-10 05:55:03.943154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.113 [2024-12-10 05:55:03.943173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:46.113 [2024-12-10 05:55:03.947997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8
00:29:46.113 [2024-12-10 05:55:03.948493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1
lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.948519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.952983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.953252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.953271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.957442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.957717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.957737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.961708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.961985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.962004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.966047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.966317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.966336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.970383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.970650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.970669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.974779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.975053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.975072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.979271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.979543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.979563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.983477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:46.113 [2024-12-10 05:55:03.983750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.983769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.988031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.988308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.988327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.992564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.992842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.992861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:03.997579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:03.997847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:03.997866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.002568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.002840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.002859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.007570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.007827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.007846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.013396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.013758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.013778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.020940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.021248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.021267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.027245] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.027517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.027536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.033069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.033372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.033391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.039464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.039826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.039845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.046075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.046381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.046401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:46.113 [2024-12-10 05:55:04.053329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.053651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.053671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.113 [2024-12-10 05:55:04.060146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.113 [2024-12-10 05:55:04.060527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.113 [2024-12-10 05:55:04.060547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.066909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.067193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.067216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.072532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.072799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.072821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.078423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.078683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.078703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.083135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.083402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.083421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.088356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.088616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.088640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.093374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.093640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.093660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.098546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.098800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.098819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.103656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.103918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.103938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.108356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.108614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.108633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.113404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.113663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.372 [2024-12-10 05:55:04.113683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.118203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.118475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.118495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.123254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.123515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.123534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.128498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.372 [2024-12-10 05:55:04.128768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.372 [2024-12-10 05:55:04.128787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.372 [2024-12-10 05:55:04.133077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.133336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.133356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.138075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.138344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.138364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.143032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.143294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.143313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.148354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.148624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.148642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.153253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.153543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.153563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.157866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.158146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.158165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.162285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.162552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.162571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.166589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.166866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.166885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.170805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:46.373 [2024-12-10 05:55:04.171094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.171114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.175049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.175328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.175347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.179276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.179558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.179578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.183507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.183783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.183802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.187770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.188043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.188062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.191970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.192258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.192277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.196294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.196564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.196583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.200655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.200931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.200950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.205982] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.206276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.206295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.210898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.211170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.211195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.215609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.215886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.215906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.221155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.221434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.221453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:46.373 [2024-12-10 05:55:04.226179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.226455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.226473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.231170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.231446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.231466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.235966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.236248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.236267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.241007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.241284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.241302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.245756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.246019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.246039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.251123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.251390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.251409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.256350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.256631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.256650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.373 [2024-12-10 05:55:04.261542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.373 [2024-12-10 05:55:04.261809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.373 [2024-12-10 05:55:04.261828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.268512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.268912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.268932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.275708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.275995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.276014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.281789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.282066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.282085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.287152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.287430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.374 [2024-12-10 05:55:04.287450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.292579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.292857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.292876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.297557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.297818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.297837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.302359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.302619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.302637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.307371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.307642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.307661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.312270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.312536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.312555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.317149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.317416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.317435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.374 [2024-12-10 05:55:04.321862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.374 [2024-12-10 05:55:04.322156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.374 [2024-12-10 05:55:04.322177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.326617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.326901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.326923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.331393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.331657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.331679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.336338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.336609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.336629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.342246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.342597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.342617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.349110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 
00:29:46.632 [2024-12-10 05:55:04.349489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.349513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.355650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.355995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.356014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.362093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.362379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.362399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.368813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.369160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.369180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.375894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.376215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.376240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.382388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.382648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.382667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.387170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.387457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.387476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.392074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.392346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.392366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.397012] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.397301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.397320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.401844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.402138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.402157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.406772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.407054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.407074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.632 [2024-12-10 05:55:04.411576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.411847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.632 [2024-12-10 05:55:04.411866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:46.632 [2024-12-10 05:55:04.416660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.632 [2024-12-10 05:55:04.416921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.416941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.421504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.421790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.421809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.426231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.426536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.426555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.430774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.431042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.431061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.436134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.436492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.436512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.442260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.442387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.442406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.447758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.448093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.448112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.454503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.454853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.454872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.459921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.460212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.464275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.464543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.464563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.468662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.468948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.468968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.473017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.473302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.633 [2024-12-10 05:55:04.473321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.477357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.477620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.477639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.481690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.481967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.481986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.486077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.486365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.486388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.490480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.490760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.490780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.494814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.495095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.495113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.633 [2024-12-10 05:55:04.499208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.499488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.499507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.633 5733.50 IOPS, 716.69 MiB/s [2024-12-10T04:55:04.592Z] [2024-12-10 05:55:04.504517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c7310) with pdu=0x200016eff3c8 00:29:46.633 [2024-12-10 05:55:04.504617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.633 [2024-12-10 05:55:04.504636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.633 00:29:46.633 Latency(us) 00:29:46.633 [2024-12-10T04:55:04.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.633 Job: 
nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:46.633 nvme0n1 : 2.00 5732.31 716.54 0.00 0.00 2786.92 2012.89 13544.11 00:29:46.633 [2024-12-10T04:55:04.592Z] =================================================================================================================== 00:29:46.633 [2024-12-10T04:55:04.592Z] Total : 5732.31 716.54 0.00 0.00 2786.92 2012.89 13544.11 00:29:46.633 { 00:29:46.633 "results": [ 00:29:46.633 { 00:29:46.633 "job": "nvme0n1", 00:29:46.633 "core_mask": "0x2", 00:29:46.633 "workload": "randwrite", 00:29:46.633 "status": "finished", 00:29:46.633 "queue_depth": 16, 00:29:46.633 "io_size": 131072, 00:29:46.633 "runtime": 2.003205, 00:29:46.633 "iops": 5732.313966868094, 00:29:46.633 "mibps": 716.5392458585118, 00:29:46.633 "io_failed": 0, 00:29:46.633 "io_timeout": 0, 00:29:46.633 "avg_latency_us": 2786.921504999108, 00:29:46.633 "min_latency_us": 2012.8914285714286, 00:29:46.633 "max_latency_us": 13544.106666666667 00:29:46.633 } 00:29:46.633 ], 00:29:46.633 "core_count": 1 00:29:46.633 } 00:29:46.633 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:46.633 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:46.633 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:46.633 | .driver_specific 00:29:46.633 | .nvme_error 00:29:46.633 | .status_code 00:29:46.633 | .command_transient_transport_error' 00:29:46.633 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 371 > 0 )) 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # 
killprocess 295784 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 295784 ']' 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 295784 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295784 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295784' 00:29:46.891 killing process with pid 295784 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 295784 00:29:46.891 Received shutdown signal, test time was about 2.000000 seconds 00:29:46.891 00:29:46.891 Latency(us) 00:29:46.891 [2024-12-10T04:55:04.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.891 [2024-12-10T04:55:04.850Z] =================================================================================================================== 00:29:46.891 [2024-12-10T04:55:04.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.891 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 295784 00:29:47.149 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 293924 00:29:47.149 05:55:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 293924 ']' 00:29:47.149 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 293924 00:29:47.149 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:47.149 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.149 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 293924 00:29:47.149 05:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.149 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.149 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 293924' 00:29:47.149 killing process with pid 293924 00:29:47.149 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 293924 00:29:47.149 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 293924 00:29:47.407 00:29:47.407 real 0m14.542s 00:29:47.407 user 0m27.256s 00:29:47.407 sys 0m4.568s 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.407 ************************************ 00:29:47.407 END TEST nvmf_digest_error 00:29:47.407 ************************************ 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 
00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.407 rmmod nvme_tcp 00:29:47.407 rmmod nvme_fabrics 00:29:47.407 rmmod nvme_keyring 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 293924 ']' 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 293924 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 293924 ']' 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 293924 00:29:47.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (293924) - No such process 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 293924 is not found' 00:29:47.407 Process with pid 293924 is not found 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.407 05:55:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.407 05:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.945 00:29:49.945 real 0m37.744s 00:29:49.945 user 0m56.035s 00:29:49.945 sys 0m14.341s 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:49.945 ************************************ 00:29:49.945 END TEST nvmf_digest 00:29:49.945 ************************************ 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.945 ************************************ 00:29:49.945 START TEST nvmf_bdevperf 00:29:49.945 ************************************ 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:49.945 * Looking for test storage... 00:29:49.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 
00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:49.945 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.946 --rc genhtml_branch_coverage=1 00:29:49.946 --rc genhtml_function_coverage=1 00:29:49.946 --rc genhtml_legend=1 00:29:49.946 --rc geninfo_all_blocks=1 00:29:49.946 --rc geninfo_unexecuted_blocks=1 00:29:49.946 00:29:49.946 ' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.946 --rc genhtml_branch_coverage=1 00:29:49.946 --rc genhtml_function_coverage=1 00:29:49.946 --rc genhtml_legend=1 00:29:49.946 --rc geninfo_all_blocks=1 00:29:49.946 --rc geninfo_unexecuted_blocks=1 00:29:49.946 00:29:49.946 ' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.946 --rc genhtml_branch_coverage=1 00:29:49.946 --rc genhtml_function_coverage=1 00:29:49.946 --rc genhtml_legend=1 00:29:49.946 --rc geninfo_all_blocks=1 00:29:49.946 --rc geninfo_unexecuted_blocks=1 00:29:49.946 00:29:49.946 ' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.946 --rc genhtml_branch_coverage=1 00:29:49.946 --rc genhtml_function_coverage=1 00:29:49.946 --rc genhtml_legend=1 00:29:49.946 --rc geninfo_all_blocks=1 00:29:49.946 --rc geninfo_unexecuted_blocks=1 00:29:49.946 00:29:49.946 ' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 
00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.946 05:55:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:29:49.946 05:55:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:56.514 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:56.514 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.514 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:56.515 Found net devices under 0000:af:00.0: cvl_0_0 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:56.515 Found net devices under 0000:af:00.1: cvl_0_1 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:29:56.515 00:29:56.515 --- 10.0.0.2 ping statistics --- 00:29:56.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.515 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:56.515 00:29:56.515 --- 10.0.0.1 ping statistics --- 00:29:56.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.515 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.515 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=300258 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 300258 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 300258 ']' 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.773 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:56.773 [2024-12-10 05:55:14.544336] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:56.773 [2024-12-10 05:55:14.544378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.773 [2024-12-10 05:55:14.628795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:56.773 [2024-12-10 05:55:14.669280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.773 [2024-12-10 05:55:14.669315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:56.773 [2024-12-10 05:55:14.669322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.773 [2024-12-10 05:55:14.669328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.773 [2024-12-10 05:55:14.669332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.773 [2024-12-10 05:55:14.670618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.773 [2024-12-10 05:55:14.670724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.773 [2024-12-10 05:55:14.670726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.036 [2024-12-10 05:55:14.806750] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.036 05:55:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.036 Malloc0 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.036 [2024-12-10 05:55:14.867584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:57.036 { 00:29:57.036 "params": { 00:29:57.036 "name": "Nvme$subsystem", 00:29:57.036 "trtype": "$TEST_TRANSPORT", 00:29:57.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:57.036 "adrfam": "ipv4", 00:29:57.036 "trsvcid": "$NVMF_PORT", 00:29:57.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:57.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:57.036 "hdgst": ${hdgst:-false}, 00:29:57.036 "ddgst": ${ddgst:-false} 00:29:57.036 }, 00:29:57.036 "method": "bdev_nvme_attach_controller" 00:29:57.036 } 00:29:57.036 EOF 00:29:57.036 )") 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:57.036 05:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:57.036 "params": { 00:29:57.036 "name": "Nvme1", 00:29:57.036 "trtype": "tcp", 00:29:57.036 "traddr": "10.0.0.2", 00:29:57.036 "adrfam": "ipv4", 00:29:57.036 "trsvcid": "4420", 00:29:57.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:57.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:57.036 "hdgst": false, 00:29:57.036 "ddgst": false 00:29:57.036 }, 00:29:57.036 "method": "bdev_nvme_attach_controller" 00:29:57.036 }' 00:29:57.036 [2024-12-10 05:55:14.919113] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:57.036 [2024-12-10 05:55:14.919154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300289 ] 00:29:57.296 [2024-12-10 05:55:15.000594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.296 [2024-12-10 05:55:15.040332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.552 Running I/O for 1 seconds... 
00:29:58.481 11300.00 IOPS, 44.14 MiB/s 00:29:58.481 Latency(us) 00:29:58.481 [2024-12-10T04:55:16.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.481 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:58.481 Verification LBA range: start 0x0 length 0x4000 00:29:58.481 Nvme1n1 : 1.01 11294.89 44.12 0.00 0.00 11289.65 2246.95 15104.49 00:29:58.481 [2024-12-10T04:55:16.440Z] =================================================================================================================== 00:29:58.481 [2024-12-10T04:55:16.440Z] Total : 11294.89 44.12 0.00 0.00 11289.65 2246.95 15104.49 00:29:58.481 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=300519 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.738 { 00:29:58.738 "params": { 00:29:58.738 "name": "Nvme$subsystem", 00:29:58.738 "trtype": "$TEST_TRANSPORT", 00:29:58.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.738 "adrfam": "ipv4", 00:29:58.738 "trsvcid": "$NVMF_PORT", 00:29:58.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.738 "hdgst": ${hdgst:-false}, 00:29:58.738 "ddgst": 
${ddgst:-false} 00:29:58.738 }, 00:29:58.738 "method": "bdev_nvme_attach_controller" 00:29:58.738 } 00:29:58.738 EOF 00:29:58.738 )") 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:58.738 05:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.738 "params": { 00:29:58.738 "name": "Nvme1", 00:29:58.738 "trtype": "tcp", 00:29:58.738 "traddr": "10.0.0.2", 00:29:58.738 "adrfam": "ipv4", 00:29:58.738 "trsvcid": "4420", 00:29:58.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.738 "hdgst": false, 00:29:58.738 "ddgst": false 00:29:58.738 }, 00:29:58.738 "method": "bdev_nvme_attach_controller" 00:29:58.739 }' 00:29:58.739 [2024-12-10 05:55:16.474731] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:29:58.739 [2024-12-10 05:55:16.474778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300519 ] 00:29:58.739 [2024-12-10 05:55:16.557233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.739 [2024-12-10 05:55:16.594606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.995 Running I/O for 15 seconds... 
00:30:01.294 11378.00 IOPS, 44.45 MiB/s [2024-12-10T04:55:19.518Z] 11341.50 IOPS, 44.30 MiB/s [2024-12-10T04:55:19.518Z] 05:55:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 300258 00:30:01.559 05:55:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:01.559 [2024-12-10 05:55:19.449593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:01.559 [2024-12-10 05:55:19.449813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.449990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.449997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:01.559 [2024-12-10 05:55:19.450100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.559 [2024-12-10 05:55:19.450182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.559 [2024-12-10 05:55:19.450189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:01.560 [2024-12-10 05:55:19.450462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:01.560 [2024-12-10 05:55:19.450707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.560 [2024-12-10 05:55:19.450868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.560 [2024-12-10 05:55:19.450875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:01.561 [2024-12-10 05:55:19.450955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.450993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.450999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:01.561 [2024-12-10 05:55:19.451207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.561 [2024-12-10 05:55:19.451389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.561 [2024-12-10 05:55:19.451404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.561 [2024-12-10 05:55:19.451420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.561 [2024-12-10 05:55:19.451433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.561 [2024-12-10 05:55:19.451441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.562 [2024-12-10 05:55:19.451448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:01.562 [2024-12-10 05:55:19.451462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.562 [2024-12-10 05:55:19.451477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.562 [2024-12-10 05:55:19.451491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.562 [2024-12-10 05:55:19.451666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.451673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c38d0 is same with the state(6) to be set 00:30:01.562 [2024-12-10 05:55:19.451681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:01.562 [2024-12-10 05:55:19.451687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:01.562 [2024-12-10 05:55:19.451692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102464 len:8 PRP1 0x0 PRP2 0x0 00:30:01.562 [2024-12-10 05:55:19.451700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.562 [2024-12-10 05:55:19.454543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] 
resetting controller 00:30:01.562 [2024-12-10 05:55:19.454596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.562 [2024-12-10 05:55:19.455146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.562 [2024-12-10 05:55:19.455162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.562 [2024-12-10 05:55:19.455169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.562 [2024-12-10 05:55:19.455349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.562 [2024-12-10 05:55:19.455523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.562 [2024-12-10 05:55:19.455531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.562 [2024-12-10 05:55:19.455539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.562 [2024-12-10 05:55:19.455547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.562 [2024-12-10 05:55:19.467723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.562 [2024-12-10 05:55:19.468087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.562 [2024-12-10 05:55:19.468104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.562 [2024-12-10 05:55:19.468116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.562 [2024-12-10 05:55:19.468291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.562 [2024-12-10 05:55:19.468460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.562 [2024-12-10 05:55:19.468468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.562 [2024-12-10 05:55:19.468474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.562 [2024-12-10 05:55:19.468481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.562 [2024-12-10 05:55:19.480630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.562 [2024-12-10 05:55:19.480914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.562 [2024-12-10 05:55:19.480930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.562 [2024-12-10 05:55:19.480937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.562 [2024-12-10 05:55:19.481105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.562 [2024-12-10 05:55:19.481282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.562 [2024-12-10 05:55:19.481291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.562 [2024-12-10 05:55:19.481298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.562 [2024-12-10 05:55:19.481305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.562 [2024-12-10 05:55:19.493487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.562 [2024-12-10 05:55:19.493932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.562 [2024-12-10 05:55:19.493984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.562 [2024-12-10 05:55:19.494008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.562 [2024-12-10 05:55:19.494605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.562 [2024-12-10 05:55:19.495076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.562 [2024-12-10 05:55:19.495084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.562 [2024-12-10 05:55:19.495091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.562 [2024-12-10 05:55:19.495097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.562 [2024-12-10 05:55:19.506905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.562 [2024-12-10 05:55:19.507295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.562 [2024-12-10 05:55:19.507315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.562 [2024-12-10 05:55:19.507324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.562 [2024-12-10 05:55:19.507533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.562 [2024-12-10 05:55:19.507747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.562 [2024-12-10 05:55:19.507757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.562 [2024-12-10 05:55:19.507764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.562 [2024-12-10 05:55:19.507772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.848 [2024-12-10 05:55:19.519963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.848 [2024-12-10 05:55:19.520372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.848 [2024-12-10 05:55:19.520389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.848 [2024-12-10 05:55:19.520397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.848 [2024-12-10 05:55:19.520570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.848 [2024-12-10 05:55:19.520743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.848 [2024-12-10 05:55:19.520751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.848 [2024-12-10 05:55:19.520758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.848 [2024-12-10 05:55:19.520764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.848 [2024-12-10 05:55:19.532980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.848 [2024-12-10 05:55:19.533274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.848 [2024-12-10 05:55:19.533292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.848 [2024-12-10 05:55:19.533299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.848 [2024-12-10 05:55:19.533471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.848 [2024-12-10 05:55:19.533645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.848 [2024-12-10 05:55:19.533653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.848 [2024-12-10 05:55:19.533659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.848 [2024-12-10 05:55:19.533665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.848 [2024-12-10 05:55:19.546029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.848 [2024-12-10 05:55:19.546386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.848 [2024-12-10 05:55:19.546404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.848 [2024-12-10 05:55:19.546411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.848 [2024-12-10 05:55:19.546584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.848 [2024-12-10 05:55:19.546758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.848 [2024-12-10 05:55:19.546766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.848 [2024-12-10 05:55:19.546777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.848 [2024-12-10 05:55:19.546784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.848 [2024-12-10 05:55:19.558957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.848 [2024-12-10 05:55:19.559304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.848 [2024-12-10 05:55:19.559321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.848 [2024-12-10 05:55:19.559328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.848 [2024-12-10 05:55:19.559496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.848 [2024-12-10 05:55:19.559663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.848 [2024-12-10 05:55:19.559672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.848 [2024-12-10 05:55:19.559678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.848 [2024-12-10 05:55:19.559684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.848 [2024-12-10 05:55:19.572020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.848 [2024-12-10 05:55:19.572316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.848 [2024-12-10 05:55:19.572333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.848 [2024-12-10 05:55:19.572341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.848 [2024-12-10 05:55:19.572515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.848 [2024-12-10 05:55:19.572694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.848 [2024-12-10 05:55:19.572702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.848 [2024-12-10 05:55:19.572708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.848 [2024-12-10 05:55:19.572715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.848 [2024-12-10 05:55:19.584904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.848 [2024-12-10 05:55:19.585266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.848 [2024-12-10 05:55:19.585315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.848 [2024-12-10 05:55:19.585340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.848 [2024-12-10 05:55:19.585921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.848 [2024-12-10 05:55:19.586137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.586144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.586151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.586158] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.597677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.598032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.598049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.598056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.598229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.598398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.598406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.598412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.598418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.610631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.610923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.610940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.610947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.611113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.611287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.611295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.611302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.611308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.623460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.623765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.623782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.623789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.623956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.624123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.624131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.624137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.624144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.636382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.636729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.636746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.636756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.636929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.637101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.637110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.637116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.637123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.649288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.649642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.649658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.649665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.649831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.649998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.650007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.650013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.650019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.662211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.662498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.662514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.662521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.662689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.662857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.662866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.662872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.662878] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.675004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.675417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.675435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.675442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.675609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.675780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.675788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.675794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.675800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.687788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.688065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.688082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.688089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.688262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.688429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.688437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.688443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.688449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.700693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.701020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.701036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.701043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.701211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.701383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.701393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.849 [2024-12-10 05:55:19.701400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.849 [2024-12-10 05:55:19.701405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.849 [2024-12-10 05:55:19.713675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.849 [2024-12-10 05:55:19.714013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.849 [2024-12-10 05:55:19.714030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.849 [2024-12-10 05:55:19.714039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.849 [2024-12-10 05:55:19.714212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.849 [2024-12-10 05:55:19.714392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.849 [2024-12-10 05:55:19.714402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.714414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.714420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.850 [2024-12-10 05:55:19.726782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.850 [2024-12-10 05:55:19.727084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.850 [2024-12-10 05:55:19.727102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.850 [2024-12-10 05:55:19.727110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.850 [2024-12-10 05:55:19.727287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.850 [2024-12-10 05:55:19.727461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.850 [2024-12-10 05:55:19.727470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.727479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.727486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.850 [2024-12-10 05:55:19.739844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.850 [2024-12-10 05:55:19.740111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.850 [2024-12-10 05:55:19.740133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.850 [2024-12-10 05:55:19.740140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.850 [2024-12-10 05:55:19.740319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.850 [2024-12-10 05:55:19.740492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.850 [2024-12-10 05:55:19.740500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.740507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.740513] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.850 [2024-12-10 05:55:19.752841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.850 [2024-12-10 05:55:19.753329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.850 [2024-12-10 05:55:19.753376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.850 [2024-12-10 05:55:19.753399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.850 [2024-12-10 05:55:19.753982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.850 [2024-12-10 05:55:19.754574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.850 [2024-12-10 05:55:19.754604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.754611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.754617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.850 [2024-12-10 05:55:19.765874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.850 [2024-12-10 05:55:19.766276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.850 [2024-12-10 05:55:19.766293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.850 [2024-12-10 05:55:19.766300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.850 [2024-12-10 05:55:19.766468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.850 [2024-12-10 05:55:19.766635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.850 [2024-12-10 05:55:19.766643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.766649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.766656] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.850 [2024-12-10 05:55:19.778730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.850 [2024-12-10 05:55:19.779148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.850 [2024-12-10 05:55:19.779164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.850 [2024-12-10 05:55:19.779171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.850 [2024-12-10 05:55:19.779352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.850 [2024-12-10 05:55:19.779521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.850 [2024-12-10 05:55:19.779529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.779536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.779542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.850 [2024-12-10 05:55:19.791794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.850 [2024-12-10 05:55:19.792149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.850 [2024-12-10 05:55:19.792166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:01.850 [2024-12-10 05:55:19.792174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:01.850 [2024-12-10 05:55:19.792352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:01.850 [2024-12-10 05:55:19.792525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.850 [2024-12-10 05:55:19.792533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.850 [2024-12-10 05:55:19.792540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.850 [2024-12-10 05:55:19.792546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.127 [2024-12-10 05:55:19.804791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.127 [2024-12-10 05:55:19.805212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-10 05:55:19.805235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.127 [2024-12-10 05:55:19.805245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.127 [2024-12-10 05:55:19.805413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.127 [2024-12-10 05:55:19.805603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.127 [2024-12-10 05:55:19.805611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.127 [2024-12-10 05:55:19.805618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.127 [2024-12-10 05:55:19.805624] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.127 [2024-12-10 05:55:19.817838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.127 [2024-12-10 05:55:19.818225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-10 05:55:19.818244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.127 [2024-12-10 05:55:19.818252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.127 [2024-12-10 05:55:19.818424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.128 [2024-12-10 05:55:19.818597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.128 [2024-12-10 05:55:19.818604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.128 [2024-12-10 05:55:19.818610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.128 [2024-12-10 05:55:19.818617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.128 [2024-12-10 05:55:19.830856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.128 [2024-12-10 05:55:19.831259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.128 [2024-12-10 05:55:19.831276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.128 [2024-12-10 05:55:19.831283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.128 [2024-12-10 05:55:19.831451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.128 [2024-12-10 05:55:19.831617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.128 [2024-12-10 05:55:19.831625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.128 [2024-12-10 05:55:19.831632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.128 [2024-12-10 05:55:19.831638] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.128 [2024-12-10 05:55:19.843710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.128 [2024-12-10 05:55:19.844119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.128 [2024-12-10 05:55:19.844136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.128 [2024-12-10 05:55:19.844143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.128 [2024-12-10 05:55:19.844315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.128 [2024-12-10 05:55:19.844490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.128 [2024-12-10 05:55:19.844498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.128 [2024-12-10 05:55:19.844504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.128 [2024-12-10 05:55:19.844510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.128 [2024-12-10 05:55:19.856479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.128 [2024-12-10 05:55:19.856905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.128 [2024-12-10 05:55:19.856921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.128 [2024-12-10 05:55:19.856928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.128 [2024-12-10 05:55:19.857096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.128 [2024-12-10 05:55:19.857269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.128 [2024-12-10 05:55:19.857278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.128 [2024-12-10 05:55:19.857285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.128 [2024-12-10 05:55:19.857291] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.128 [2024-12-10 05:55:19.869258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.869650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.128 [2024-12-10 05:55:19.869666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.128 [2024-12-10 05:55:19.869672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.128 [2024-12-10 05:55:19.869830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.128 [2024-12-10 05:55:19.869989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.128 [2024-12-10 05:55:19.869996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.128 [2024-12-10 05:55:19.870002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.128 [2024-12-10 05:55:19.870008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.128 [2024-12-10 05:55:19.881998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.882430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.128 [2024-12-10 05:55:19.882476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.128 [2024-12-10 05:55:19.882499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.128 [2024-12-10 05:55:19.882997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.128 [2024-12-10 05:55:19.883165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.128 [2024-12-10 05:55:19.883173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.128 [2024-12-10 05:55:19.883183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.128 [2024-12-10 05:55:19.883189] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.128 9690.67 IOPS, 37.85 MiB/s [2024-12-10T04:55:20.087Z] [2024-12-10 05:55:19.894833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.895255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.128 [2024-12-10 05:55:19.895272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.128 [2024-12-10 05:55:19.895279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.128 [2024-12-10 05:55:19.895446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.128 [2024-12-10 05:55:19.895613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.128 [2024-12-10 05:55:19.895621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.128 [2024-12-10 05:55:19.895627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.128 [2024-12-10 05:55:19.895633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.128 [2024-12-10 05:55:19.907636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.908024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.128 [2024-12-10 05:55:19.908040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.128 [2024-12-10 05:55:19.908047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.128 [2024-12-10 05:55:19.908206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.128 [2024-12-10 05:55:19.908393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.128 [2024-12-10 05:55:19.908402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.128 [2024-12-10 05:55:19.908408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.128 [2024-12-10 05:55:19.908414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.128 [2024-12-10 05:55:19.920376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.920812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.128 [2024-12-10 05:55:19.920829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.128 [2024-12-10 05:55:19.920836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.128 [2024-12-10 05:55:19.921003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.128 [2024-12-10 05:55:19.921170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.128 [2024-12-10 05:55:19.921177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.128 [2024-12-10 05:55:19.921183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.128 [2024-12-10 05:55:19.921189] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.128 [2024-12-10 05:55:19.933223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.933623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.128 [2024-12-10 05:55:19.933667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.128 [2024-12-10 05:55:19.933690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.128 [2024-12-10 05:55:19.934137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.128 [2024-12-10 05:55:19.934321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.128 [2024-12-10 05:55:19.934330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.128 [2024-12-10 05:55:19.934336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.128 [2024-12-10 05:55:19.934342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.128 [2024-12-10 05:55:19.945945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.128 [2024-12-10 05:55:19.946354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:19.946371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:19.946378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:19.946546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:19.946717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:19.946725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:19.946731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:19.946737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:19.959019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:19.959472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:19.959489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:19.959497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:19.959672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:19.959845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:19.959854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:19.959861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:19.959867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:19.971986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:19.972429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:19.972446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:19.972457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:19.972625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:19.972796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:19.972804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:19.972811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:19.972817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:19.984834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:19.985252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:19.985297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:19.985321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:19.985903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:19.986355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:19.986363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:19.986370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:19.986377] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:19.997658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:19.998081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:19.998098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:19.998105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:19.998286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:19.998454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:19.998462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:19.998468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:19.998474] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:20.010941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:20.011296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:20.011314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:20.011322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:20.011506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:20.011693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:20.011702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:20.011708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:20.011715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:20.023901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:20.024309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:20.024326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:20.024333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:20.024506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:20.024680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:20.024688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:20.024694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:20.024701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:20.036860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:20.037307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:20.037325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:20.037333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:20.037507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:20.037681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:20.037689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:20.037697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:20.037703] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:20.049799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:20.050234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:20.050251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:20.050259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:20.050432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:20.050605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:20.050613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:20.050623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:20.050629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:20.063021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:20.063439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.129 [2024-12-10 05:55:20.063457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.129 [2024-12-10 05:55:20.063464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.129 [2024-12-10 05:55:20.063637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.129 [2024-12-10 05:55:20.063809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.129 [2024-12-10 05:55:20.063817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.129 [2024-12-10 05:55:20.063823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.129 [2024-12-10 05:55:20.063830] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.129 [2024-12-10 05:55:20.076039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.129 [2024-12-10 05:55:20.076452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.130 [2024-12-10 05:55:20.076469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.130 [2024-12-10 05:55:20.076477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.130 [2024-12-10 05:55:20.076650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.130 [2024-12-10 05:55:20.076822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.130 [2024-12-10 05:55:20.076830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.130 [2024-12-10 05:55:20.076837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.130 [2024-12-10 05:55:20.076843] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.399 [2024-12-10 05:55:20.089066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.399 [2024-12-10 05:55:20.089479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.399 [2024-12-10 05:55:20.089497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.399 [2024-12-10 05:55:20.089504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.399 [2024-12-10 05:55:20.089677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.399 [2024-12-10 05:55:20.089849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.399 [2024-12-10 05:55:20.089857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.399 [2024-12-10 05:55:20.089863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.399 [2024-12-10 05:55:20.089869] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.399 [2024-12-10 05:55:20.102080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.399 [2024-12-10 05:55:20.102491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.399 [2024-12-10 05:55:20.102508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.399 [2024-12-10 05:55:20.102516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.399 [2024-12-10 05:55:20.102689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.399 [2024-12-10 05:55:20.102862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.399 [2024-12-10 05:55:20.102870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.399 [2024-12-10 05:55:20.102876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.399 [2024-12-10 05:55:20.102882] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.399 [2024-12-10 05:55:20.115067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.399 [2024-12-10 05:55:20.115416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.399 [2024-12-10 05:55:20.115433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.399 [2024-12-10 05:55:20.115441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.399 [2024-12-10 05:55:20.115608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.399 [2024-12-10 05:55:20.115776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.399 [2024-12-10 05:55:20.115784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.399 [2024-12-10 05:55:20.115790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.399 [2024-12-10 05:55:20.115797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.399 [2024-12-10 05:55:20.127919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.399 [2024-12-10 05:55:20.128349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.399 [2024-12-10 05:55:20.128394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.399 [2024-12-10 05:55:20.128417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.399 [2024-12-10 05:55:20.128999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.399 [2024-12-10 05:55:20.129477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.399 [2024-12-10 05:55:20.129486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.399 [2024-12-10 05:55:20.129492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.399 [2024-12-10 05:55:20.129498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.399 [2024-12-10 05:55:20.140947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.399 [2024-12-10 05:55:20.141340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.399 [2024-12-10 05:55:20.141387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.399 [2024-12-10 05:55:20.141417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.399 [2024-12-10 05:55:20.142002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.399 [2024-12-10 05:55:20.142521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.399 [2024-12-10 05:55:20.142529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.399 [2024-12-10 05:55:20.142535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.399 [2024-12-10 05:55:20.142541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.399 [2024-12-10 05:55:20.154124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.399 [2024-12-10 05:55:20.154524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.399 [2024-12-10 05:55:20.154541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.399 [2024-12-10 05:55:20.154548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.399 [2024-12-10 05:55:20.154716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.399 [2024-12-10 05:55:20.154883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.400 [2024-12-10 05:55:20.154891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.400 [2024-12-10 05:55:20.154897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.400 [2024-12-10 05:55:20.154903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.400 [2024-12-10 05:55:20.166973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.400 [2024-12-10 05:55:20.167394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.400 [2024-12-10 05:55:20.167411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:02.400 [2024-12-10 05:55:20.167418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:02.400 [2024-12-10 05:55:20.167586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:02.400 [2024-12-10 05:55:20.167753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.400 [2024-12-10 05:55:20.167761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.400 [2024-12-10 05:55:20.167767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.400 [2024-12-10 05:55:20.167773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.400 [2024-12-10 05:55:20.179716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.180147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.180192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.180215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.180694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.180865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.180873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.180879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.180885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.192564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.192992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.193009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.193016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.193183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.193358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.193367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.193373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.193379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.205298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.205749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.205794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.205818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.206416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.206930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.206938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.206944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.206950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.218187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.218572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.218590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.218599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.218767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.218934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.218943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.218954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.218961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.231256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.231685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.231701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.231709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.231883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.232082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.232090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.232097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.232103] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.244103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.244545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.244561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.244569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.244737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.244904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.244911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.244918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.244924] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.256956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.257393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.257432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.257457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.258009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.258177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.258185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.258191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.258197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.269831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.270250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.270267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.270274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.270442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.270610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.270618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.270624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.400 [2024-12-10 05:55:20.270630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.400 [2024-12-10 05:55:20.282681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.400 [2024-12-10 05:55:20.283103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-10 05:55:20.283119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.400 [2024-12-10 05:55:20.283127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.400 [2024-12-10 05:55:20.283302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.400 [2024-12-10 05:55:20.283470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.400 [2024-12-10 05:55:20.283477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.400 [2024-12-10 05:55:20.283484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.401 [2024-12-10 05:55:20.283489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.401 [2024-12-10 05:55:20.295465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.401 [2024-12-10 05:55:20.295846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.401 [2024-12-10 05:55:20.295890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.401 [2024-12-10 05:55:20.295912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.401 [2024-12-10 05:55:20.296510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.401 [2024-12-10 05:55:20.297028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.401 [2024-12-10 05:55:20.297036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.401 [2024-12-10 05:55:20.297042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.401 [2024-12-10 05:55:20.297048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.401 [2024-12-10 05:55:20.308332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.401 [2024-12-10 05:55:20.308770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.401 [2024-12-10 05:55:20.308808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.401 [2024-12-10 05:55:20.308841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.401 [2024-12-10 05:55:20.309438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.401 [2024-12-10 05:55:20.309906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.401 [2024-12-10 05:55:20.309914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.401 [2024-12-10 05:55:20.309920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.401 [2024-12-10 05:55:20.309926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.401 [2024-12-10 05:55:20.321055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.401 [2024-12-10 05:55:20.321503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.401 [2024-12-10 05:55:20.321548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.401 [2024-12-10 05:55:20.321571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.401 [2024-12-10 05:55:20.322061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.401 [2024-12-10 05:55:20.322236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.401 [2024-12-10 05:55:20.322244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.401 [2024-12-10 05:55:20.322251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.401 [2024-12-10 05:55:20.322257] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.401 [2024-12-10 05:55:20.333888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.401 [2024-12-10 05:55:20.334296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.401 [2024-12-10 05:55:20.334313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.401 [2024-12-10 05:55:20.334320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.401 [2024-12-10 05:55:20.334487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.401 [2024-12-10 05:55:20.334656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.401 [2024-12-10 05:55:20.334664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.401 [2024-12-10 05:55:20.334670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.401 [2024-12-10 05:55:20.334676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.401 [2024-12-10 05:55:20.346858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.401 [2024-12-10 05:55:20.347266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.401 [2024-12-10 05:55:20.347282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.401 [2024-12-10 05:55:20.347290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.401 [2024-12-10 05:55:20.347462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.401 [2024-12-10 05:55:20.347637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.401 [2024-12-10 05:55:20.347645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.401 [2024-12-10 05:55:20.347651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.401 [2024-12-10 05:55:20.347658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.661 [2024-12-10 05:55:20.359794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.661 [2024-12-10 05:55:20.360160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.661 [2024-12-10 05:55:20.360176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.661 [2024-12-10 05:55:20.360183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.661 [2024-12-10 05:55:20.360357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.661 [2024-12-10 05:55:20.360526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.661 [2024-12-10 05:55:20.360533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.661 [2024-12-10 05:55:20.360540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.661 [2024-12-10 05:55:20.360546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.661 [2024-12-10 05:55:20.372669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.661 [2024-12-10 05:55:20.373013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.661 [2024-12-10 05:55:20.373042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.661 [2024-12-10 05:55:20.373066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.661 [2024-12-10 05:55:20.373664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.661 [2024-12-10 05:55:20.374092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.661 [2024-12-10 05:55:20.374110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.661 [2024-12-10 05:55:20.374124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.661 [2024-12-10 05:55:20.374137] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.661 [2024-12-10 05:55:20.387766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.661 [2024-12-10 05:55:20.388248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.661 [2024-12-10 05:55:20.388270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.661 [2024-12-10 05:55:20.388280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.661 [2024-12-10 05:55:20.388533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.661 [2024-12-10 05:55:20.388790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.661 [2024-12-10 05:55:20.388801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.661 [2024-12-10 05:55:20.388814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.661 [2024-12-10 05:55:20.388823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.661 [2024-12-10 05:55:20.400818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.661 [2024-12-10 05:55:20.401225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.661 [2024-12-10 05:55:20.401243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.661 [2024-12-10 05:55:20.401250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.661 [2024-12-10 05:55:20.401423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.661 [2024-12-10 05:55:20.401596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.661 [2024-12-10 05:55:20.401604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.661 [2024-12-10 05:55:20.401610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.661 [2024-12-10 05:55:20.401616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.661 [2024-12-10 05:55:20.413580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.661 [2024-12-10 05:55:20.413977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.661 [2024-12-10 05:55:20.413993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.661 [2024-12-10 05:55:20.414000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.414168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.414342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.414351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.414357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.414363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.426403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.426818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.426835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.426842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.427009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.427177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.427184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.427191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.427197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.439247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.439658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.439674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.439681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.439848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.440015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.440023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.440030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.440035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.452069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.452438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.452454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.452461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.452628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.452796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.452803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.452809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.452815] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.464883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.465269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.465286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.465293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.465461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.465628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.465636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.465642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.465648] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.477729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.478167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.478184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.478196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.478373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.478541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.478549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.478555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.478561] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.490766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.491124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.491141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.491149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.491328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.491501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.491509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.491515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.491521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.503580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.503995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.504011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.504018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.504185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.504359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.504368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.504374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.504380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.516313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.516724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.516740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.516747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.516914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.517085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.517093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.517099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.517105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.529194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.529663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.529707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.529730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.530201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.662 [2024-12-10 05:55:20.530375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.662 [2024-12-10 05:55:20.530384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.662 [2024-12-10 05:55:20.530390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.662 [2024-12-10 05:55:20.530396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.662 [2024-12-10 05:55:20.542010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.662 [2024-12-10 05:55:20.542405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.662 [2024-12-10 05:55:20.542422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.662 [2024-12-10 05:55:20.542429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.662 [2024-12-10 05:55:20.542596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.663 [2024-12-10 05:55:20.542763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.663 [2024-12-10 05:55:20.542771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.663 [2024-12-10 05:55:20.542777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.663 [2024-12-10 05:55:20.542783] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.663 [2024-12-10 05:55:20.554971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.663 [2024-12-10 05:55:20.555329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.663 [2024-12-10 05:55:20.555346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.663 [2024-12-10 05:55:20.555354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.663 [2024-12-10 05:55:20.555526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.663 [2024-12-10 05:55:20.555700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.663 [2024-12-10 05:55:20.555708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.663 [2024-12-10 05:55:20.555718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.663 [2024-12-10 05:55:20.555724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.663 [2024-12-10 05:55:20.567936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.663 [2024-12-10 05:55:20.568264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.663 [2024-12-10 05:55:20.568281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.663 [2024-12-10 05:55:20.568288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.663 [2024-12-10 05:55:20.568456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.663 [2024-12-10 05:55:20.568623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.663 [2024-12-10 05:55:20.568631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.663 [2024-12-10 05:55:20.568637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.663 [2024-12-10 05:55:20.568643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.663 [2024-12-10 05:55:20.580784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.663 [2024-12-10 05:55:20.581235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.663 [2024-12-10 05:55:20.581283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.663 [2024-12-10 05:55:20.581307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.663 [2024-12-10 05:55:20.581795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.663 [2024-12-10 05:55:20.581963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.663 [2024-12-10 05:55:20.581971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.663 [2024-12-10 05:55:20.581977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.663 [2024-12-10 05:55:20.581983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.663 [2024-12-10 05:55:20.593549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.663 [2024-12-10 05:55:20.593983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.663 [2024-12-10 05:55:20.593999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.663 [2024-12-10 05:55:20.594006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.663 [2024-12-10 05:55:20.594174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.663 [2024-12-10 05:55:20.594347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.663 [2024-12-10 05:55:20.594356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.663 [2024-12-10 05:55:20.594362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.663 [2024-12-10 05:55:20.594368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.663 [2024-12-10 05:55:20.606355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.663 [2024-12-10 05:55:20.606756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.663 [2024-12-10 05:55:20.606800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.663 [2024-12-10 05:55:20.606823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.663 [2024-12-10 05:55:20.607423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.663 [2024-12-10 05:55:20.607836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.663 [2024-12-10 05:55:20.607844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.663 [2024-12-10 05:55:20.607850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.663 [2024-12-10 05:55:20.607857] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.923 [2024-12-10 05:55:20.619308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.923 [2024-12-10 05:55:20.619714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.923 [2024-12-10 05:55:20.619731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.923 [2024-12-10 05:55:20.619739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.923 [2024-12-10 05:55:20.619912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.923 [2024-12-10 05:55:20.620085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.923 [2024-12-10 05:55:20.620093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.923 [2024-12-10 05:55:20.620100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.923 [2024-12-10 05:55:20.620106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.923 [2024-12-10 05:55:20.632117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.923 [2024-12-10 05:55:20.632529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.923 [2024-12-10 05:55:20.632546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.923 [2024-12-10 05:55:20.632553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.923 [2024-12-10 05:55:20.632721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.923 [2024-12-10 05:55:20.632889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.632897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.632903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.632910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.645055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.645458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.645475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.645486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.645654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.645822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.645830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.645835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.645841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.657874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.658309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.658326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.658333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.658500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.658668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.658676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.658682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.658688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.670639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.671070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.671087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.671094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.671268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.671437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.671445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.671452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.671458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.683435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.683753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.683790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.683815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.684413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.684906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.684913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.684919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.684926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.696260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.696656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.696672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.696678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.696837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.696995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.697003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.697009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.697014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.709053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.709476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.709521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.709545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.710068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.710242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.710251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.710257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.710263] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.721846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.722187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.722203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.722210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.722385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.722553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.722561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.722570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.722576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.734806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.735226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.735244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.735252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.735425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.735597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.735606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.735612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.735618] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.747839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.748205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.748228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.748236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.924 [2024-12-10 05:55:20.748410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.924 [2024-12-10 05:55:20.748583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.924 [2024-12-10 05:55:20.748591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.924 [2024-12-10 05:55:20.748597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.924 [2024-12-10 05:55:20.748603] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.924 [2024-12-10 05:55:20.760816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.924 [2024-12-10 05:55:20.761244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.924 [2024-12-10 05:55:20.761262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.924 [2024-12-10 05:55:20.761269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.761443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.761620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.761628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.761635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.761641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.773853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.774288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.774306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.774314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.774487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.774661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.774669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.774676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.774682] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.787050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.787387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.787405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.787412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.787584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.787756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.787765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.787771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.787777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.800127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.800555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.800572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.800579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.800751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.800928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.800936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.800942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.800949] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.813441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.813855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.813873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.813884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.814067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.814257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.814266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.814272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.814279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.826756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.827208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.827268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.827293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.827875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.828321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.828330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.828336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.828342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.839723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.840067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.840083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.840090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.840267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.840440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.840448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.840454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.840461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.852711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.852993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.853009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.853016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.853188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.853369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.853377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.853384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.853390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.925 [2024-12-10 05:55:20.865723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.925 [2024-12-10 05:55:20.866123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.925 [2024-12-10 05:55:20.866168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:02.925 [2024-12-10 05:55:20.866192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:02.925 [2024-12-10 05:55:20.866789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:02.925 [2024-12-10 05:55:20.867398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.925 [2024-12-10 05:55:20.867407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.925 [2024-12-10 05:55:20.867413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.925 [2024-12-10 05:55:20.867420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.185 [2024-12-10 05:55:20.878707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.185 [2024-12-10 05:55:20.879040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.185 [2024-12-10 05:55:20.879057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.185 [2024-12-10 05:55:20.879064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.185 [2024-12-10 05:55:20.879236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.185 [2024-12-10 05:55:20.879404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.185 [2024-12-10 05:55:20.879412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.185 [2024-12-10 05:55:20.879418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.185 [2024-12-10 05:55:20.879424] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.185 7268.00 IOPS, 28.39 MiB/s [2024-12-10T04:55:21.144Z] [2024-12-10 05:55:20.892878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.185 [2024-12-10 05:55:20.893231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.185 [2024-12-10 05:55:20.893248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.185 [2024-12-10 05:55:20.893255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.185 [2024-12-10 05:55:20.893422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.893590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.893598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.893607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.893613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.905683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.906015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.906031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.906038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.906206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.906379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.906388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.906394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.906400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.918597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.919008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.919024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.919032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.919198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.919373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.919381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.919388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.919394] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.931551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.931906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.931951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.931974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.932452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.932622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.932630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.932636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.932642] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.944356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.944683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.944700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.944707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.944874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.945042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.945050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.945056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.945062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.957309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.957642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.957659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.957666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.957838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.958010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.958018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.958024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.958030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.970117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.970473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.970490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.970497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.970663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.970831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.970839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.970845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.970851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.982961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.983386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.983406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.983413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.983584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.983747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.983754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.983760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.983766] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:20.995899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:20.996181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:20.996198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:20.996206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:20.996379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:20.996548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:20.996556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:20.996562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:20.996569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:21.008884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:21.009239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:21.009258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:21.009266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:21.009439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:21.009612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:21.009620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.186 [2024-12-10 05:55:21.009626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.186 [2024-12-10 05:55:21.009632] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.186 [2024-12-10 05:55:21.021820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.186 [2024-12-10 05:55:21.022106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.186 [2024-12-10 05:55:21.022122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.186 [2024-12-10 05:55:21.022129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.186 [2024-12-10 05:55:21.022302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.186 [2024-12-10 05:55:21.022473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.186 [2024-12-10 05:55:21.022481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.187 [2024-12-10 05:55:21.022487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.187 [2024-12-10 05:55:21.022493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.187 [2024-12-10 05:55:21.034818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.187 [2024-12-10 05:55:21.035170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.187 [2024-12-10 05:55:21.035187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.187 [2024-12-10 05:55:21.035194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.187 [2024-12-10 05:55:21.035371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.187 [2024-12-10 05:55:21.035552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.187 [2024-12-10 05:55:21.035560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.187 [2024-12-10 05:55:21.035566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.187 [2024-12-10 05:55:21.035572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.187 [2024-12-10 05:55:21.047734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.187 [2024-12-10 05:55:21.048030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.187 [2024-12-10 05:55:21.048047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.187 [2024-12-10 05:55:21.048054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.187 [2024-12-10 05:55:21.048232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.187 [2024-12-10 05:55:21.048404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.187 [2024-12-10 05:55:21.048412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.187 [2024-12-10 05:55:21.048419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.187 [2024-12-10 05:55:21.048425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.187 [2024-12-10 05:55:21.060650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.187 [2024-12-10 05:55:21.061002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.187 [2024-12-10 05:55:21.061018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.187 [2024-12-10 05:55:21.061025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.187 [2024-12-10 05:55:21.061193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.187 [2024-12-10 05:55:21.061385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.187 [2024-12-10 05:55:21.061394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.187 [2024-12-10 05:55:21.061404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.187 [2024-12-10 05:55:21.061410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.187 [2024-12-10 05:55:21.073616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.187 [2024-12-10 05:55:21.074056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.187 [2024-12-10 05:55:21.074100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.187 [2024-12-10 05:55:21.074124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.187 [2024-12-10 05:55:21.074649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.187 [2024-12-10 05:55:21.074832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.187 [2024-12-10 05:55:21.074840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.187 [2024-12-10 05:55:21.074846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.187 [2024-12-10 05:55:21.074853] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.187 [2024-12-10 05:55:21.086625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.187 [2024-12-10 05:55:21.086891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.187 [2024-12-10 05:55:21.086946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.187 [2024-12-10 05:55:21.086969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.187 [2024-12-10 05:55:21.087564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.187 [2024-12-10 05:55:21.087799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.187 [2024-12-10 05:55:21.087808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.187 [2024-12-10 05:55:21.087814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.187 [2024-12-10 05:55:21.087820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.187 [2024-12-10 05:55:21.099512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.187 [2024-12-10 05:55:21.099789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.187 [2024-12-10 05:55:21.099805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.187 [2024-12-10 05:55:21.099812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.187 [2024-12-10 05:55:21.099980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.187 [2024-12-10 05:55:21.100147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.187 [2024-12-10 05:55:21.100156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.187 [2024-12-10 05:55:21.100162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.187 [2024-12-10 05:55:21.100168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.187 [2024-12-10 05:55:21.112623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.187 [2024-12-10 05:55:21.112957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.187 [2024-12-10 05:55:21.112973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.187 [2024-12-10 05:55:21.112980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.187 [2024-12-10 05:55:21.113152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.187 [2024-12-10 05:55:21.113363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.187 [2024-12-10 05:55:21.113373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.187 [2024-12-10 05:55:21.113379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.187 [2024-12-10 05:55:21.113386] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.187 [2024-12-10 05:55:21.125591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.187 [2024-12-10 05:55:21.125879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.187 [2024-12-10 05:55:21.125896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.187 [2024-12-10 05:55:21.125903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.187 [2024-12-10 05:55:21.126076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.187 [2024-12-10 05:55:21.126255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.187 [2024-12-10 05:55:21.126264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.187 [2024-12-10 05:55:21.126271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.187 [2024-12-10 05:55:21.126277] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.447 [2024-12-10 05:55:21.138671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.447 [2024-12-10 05:55:21.138960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.447 [2024-12-10 05:55:21.138976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.447 [2024-12-10 05:55:21.138984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.447 [2024-12-10 05:55:21.139155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.447 [2024-12-10 05:55:21.139334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.447 [2024-12-10 05:55:21.139342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.447 [2024-12-10 05:55:21.139348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.447 [2024-12-10 05:55:21.139355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.447 [2024-12-10 05:55:21.151428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.447 [2024-12-10 05:55:21.151831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.447 [2024-12-10 05:55:21.151875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.447 [2024-12-10 05:55:21.151905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.447 [2024-12-10 05:55:21.152502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.447 [2024-12-10 05:55:21.153086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.447 [2024-12-10 05:55:21.153103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.447 [2024-12-10 05:55:21.153116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.447 [2024-12-10 05:55:21.153129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.447 [2024-12-10 05:55:21.166508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.447 [2024-12-10 05:55:21.166863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.447 [2024-12-10 05:55:21.166884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.447 [2024-12-10 05:55:21.166895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.447 [2024-12-10 05:55:21.167148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.447 [2024-12-10 05:55:21.167408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.447 [2024-12-10 05:55:21.167420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.447 [2024-12-10 05:55:21.167430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.447 [2024-12-10 05:55:21.167439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.447 [2024-12-10 05:55:21.179509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.447 [2024-12-10 05:55:21.179933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.447 [2024-12-10 05:55:21.179949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.447 [2024-12-10 05:55:21.179956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.447 [2024-12-10 05:55:21.180122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.447 [2024-12-10 05:55:21.180304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.447 [2024-12-10 05:55:21.180313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.447 [2024-12-10 05:55:21.180319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.447 [2024-12-10 05:55:21.180325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.447 [2024-12-10 05:55:21.192489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.447 [2024-12-10 05:55:21.192932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.447 [2024-12-10 05:55:21.192948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.192956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.193123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.193303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.193312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.193318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.193323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.205332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.205739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.205784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.205807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.206226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.206409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.206417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.206423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.206429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.218154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.218507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.218524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.218531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.218699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.218867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.218875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.218881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.218887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.230957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.231380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.231424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.231447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.232028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.232208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.232215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.232231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.232237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.243788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.244120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.244136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.244143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.244326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.244494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.244502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.244508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.244514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.256535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.256956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.256973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.256980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.257139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.257323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.257332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.257338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.257344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.269700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.270144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.270161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.270168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.270346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.270518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.270526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.270533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.270539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.282627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.283063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.283080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.283087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.283260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.283428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.283436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.283442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.283448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.295353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.295691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.295706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.295713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.295872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.296030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.296037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.296043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.296049] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.308115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.448 [2024-12-10 05:55:21.308554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.448 [2024-12-10 05:55:21.308571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.448 [2024-12-10 05:55:21.308578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.448 [2024-12-10 05:55:21.308745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.448 [2024-12-10 05:55:21.308913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.448 [2024-12-10 05:55:21.308921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.448 [2024-12-10 05:55:21.308927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.448 [2024-12-10 05:55:21.308933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.448 [2024-12-10 05:55:21.320914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.449 [2024-12-10 05:55:21.321246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.449 [2024-12-10 05:55:21.321263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.449 [2024-12-10 05:55:21.321273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.449 [2024-12-10 05:55:21.321442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.449 [2024-12-10 05:55:21.321610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.449 [2024-12-10 05:55:21.321618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.449 [2024-12-10 05:55:21.321625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.449 [2024-12-10 05:55:21.321630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.449 [2024-12-10 05:55:21.333683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.449 [2024-12-10 05:55:21.334102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.449 [2024-12-10 05:55:21.334119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.449 [2024-12-10 05:55:21.334125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.449 [2024-12-10 05:55:21.334308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.449 [2024-12-10 05:55:21.334475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.449 [2024-12-10 05:55:21.334483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.449 [2024-12-10 05:55:21.334489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.449 [2024-12-10 05:55:21.334495] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.449 [2024-12-10 05:55:21.346513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.449 [2024-12-10 05:55:21.346925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.449 [2024-12-10 05:55:21.346940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.449 [2024-12-10 05:55:21.346947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.449 [2024-12-10 05:55:21.347105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.449 [2024-12-10 05:55:21.347286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.449 [2024-12-10 05:55:21.347295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.449 [2024-12-10 05:55:21.347301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.449 [2024-12-10 05:55:21.347307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.449 [2024-12-10 05:55:21.359460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.449 [2024-12-10 05:55:21.359875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.449 [2024-12-10 05:55:21.359891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.449 [2024-12-10 05:55:21.359898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.449 [2024-12-10 05:55:21.360066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.449 [2024-12-10 05:55:21.360243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.449 [2024-12-10 05:55:21.360252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.449 [2024-12-10 05:55:21.360258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.449 [2024-12-10 05:55:21.360265] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.449 [2024-12-10 05:55:21.372292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.449 [2024-12-10 05:55:21.372724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.449 [2024-12-10 05:55:21.372741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.449 [2024-12-10 05:55:21.372748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.449 [2024-12-10 05:55:21.372915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.449 [2024-12-10 05:55:21.373083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.449 [2024-12-10 05:55:21.373090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.449 [2024-12-10 05:55:21.373096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.449 [2024-12-10 05:55:21.373102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.449 [2024-12-10 05:55:21.385114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.449 [2024-12-10 05:55:21.385575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.449 [2024-12-10 05:55:21.385621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.449 [2024-12-10 05:55:21.385644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.449 [2024-12-10 05:55:21.386200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.449 [2024-12-10 05:55:21.386600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.449 [2024-12-10 05:55:21.386617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.449 [2024-12-10 05:55:21.386631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.449 [2024-12-10 05:55:21.386644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.710 [2024-12-10 05:55:21.399946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.710 [2024-12-10 05:55:21.400457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.710 [2024-12-10 05:55:21.400502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.710 [2024-12-10 05:55:21.400525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.710 [2024-12-10 05:55:21.401108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.710 [2024-12-10 05:55:21.401711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.710 [2024-12-10 05:55:21.401723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.710 [2024-12-10 05:55:21.401736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.710 [2024-12-10 05:55:21.401745] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.710 [2024-12-10 05:55:21.412872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.710 [2024-12-10 05:55:21.413298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.710 [2024-12-10 05:55:21.413315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.710 [2024-12-10 05:55:21.413323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.710 [2024-12-10 05:55:21.413490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.710 [2024-12-10 05:55:21.413658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.710 [2024-12-10 05:55:21.413667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.710 [2024-12-10 05:55:21.413673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.710 [2024-12-10 05:55:21.413679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.710 [2024-12-10 05:55:21.425669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.710 [2024-12-10 05:55:21.425997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.710 [2024-12-10 05:55:21.426013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.710 [2024-12-10 05:55:21.426019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.710 [2024-12-10 05:55:21.426178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.710 [2024-12-10 05:55:21.426363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.710 [2024-12-10 05:55:21.426372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.710 [2024-12-10 05:55:21.426378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.710 [2024-12-10 05:55:21.426384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.710 [2024-12-10 05:55:21.438407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.710 [2024-12-10 05:55:21.438821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.710 [2024-12-10 05:55:21.438837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:03.710 [2024-12-10 05:55:21.438844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:03.710 [2024-12-10 05:55:21.439002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:03.710 [2024-12-10 05:55:21.439161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.710 [2024-12-10 05:55:21.439169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.710 [2024-12-10 05:55:21.439175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.710 [2024-12-10 05:55:21.439180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.710 [2024-12-10 05:55:21.451280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.710 [2024-12-10 05:55:21.451625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-12-10 05:55:21.451641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.710 [2024-12-10 05:55:21.451647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.710 [2024-12-10 05:55:21.451806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.710 [2024-12-10 05:55:21.451965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.710 [2024-12-10 05:55:21.451973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.710 [2024-12-10 05:55:21.451979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.710 [2024-12-10 05:55:21.451984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.710 [2024-12-10 05:55:21.464108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.710 [2024-12-10 05:55:21.464545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-12-10 05:55:21.464560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.710 [2024-12-10 05:55:21.464567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.710 [2024-12-10 05:55:21.464735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.710 [2024-12-10 05:55:21.464902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.710 [2024-12-10 05:55:21.464910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.710 [2024-12-10 05:55:21.464916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.710 [2024-12-10 05:55:21.464922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.710 [2024-12-10 05:55:21.477011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.710 [2024-12-10 05:55:21.477457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-12-10 05:55:21.477473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.710 [2024-12-10 05:55:21.477480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.710 [2024-12-10 05:55:21.477653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.710 [2024-12-10 05:55:21.477820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.710 [2024-12-10 05:55:21.477828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.710 [2024-12-10 05:55:21.477834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.710 [2024-12-10 05:55:21.477840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.710 [2024-12-10 05:55:21.489898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.710 [2024-12-10 05:55:21.490241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-12-10 05:55:21.490258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.710 [2024-12-10 05:55:21.490268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.710 [2024-12-10 05:55:21.490427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.710 [2024-12-10 05:55:21.490585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.710 [2024-12-10 05:55:21.490593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.710 [2024-12-10 05:55:21.490598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.710 [2024-12-10 05:55:21.490604] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.710 [2024-12-10 05:55:21.502786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.710 [2024-12-10 05:55:21.503201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-12-10 05:55:21.503222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.710 [2024-12-10 05:55:21.503230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.710 [2024-12-10 05:55:21.503412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.710 [2024-12-10 05:55:21.503581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.710 [2024-12-10 05:55:21.503589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.710 [2024-12-10 05:55:21.503595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.710 [2024-12-10 05:55:21.503601] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.710 [2024-12-10 05:55:21.515539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.710 [2024-12-10 05:55:21.515974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-12-10 05:55:21.515991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.710 [2024-12-10 05:55:21.515998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.710 [2024-12-10 05:55:21.516166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.710 [2024-12-10 05:55:21.516357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.710 [2024-12-10 05:55:21.516365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.710 [2024-12-10 05:55:21.516372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.516378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.528562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.528920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.528936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.528943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.529116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.529301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.529311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.529319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.529326] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.541364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.541777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.541793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.541800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.541958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.542116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.542123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.542129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.542135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.554119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.554570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.554615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.554637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.555064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.555238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.555247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.555253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.555259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.567050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.567390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.567407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.567414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.567581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.567748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.567756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.567765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.567772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.579909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.580223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.580239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.580262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.580430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.580598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.580606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.580612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.580619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.592744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.593196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.593253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.593277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.593859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.594296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.594305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.594311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.594318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.605486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.605908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.605924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.605931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.606089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.606267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.606276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.606281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.606288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.618355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.618775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.618791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.618797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.618956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.619113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.619121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.619127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.619132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.631212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.631678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.631722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.631745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.632183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.632370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.632378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.632385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.632391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.711 [2024-12-10 05:55:21.643996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.711 [2024-12-10 05:55:21.644410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.711 [2024-12-10 05:55:21.644427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.711 [2024-12-10 05:55:21.644433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.711 [2024-12-10 05:55:21.644592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.711 [2024-12-10 05:55:21.644751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.711 [2024-12-10 05:55:21.644758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.711 [2024-12-10 05:55:21.644764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.711 [2024-12-10 05:55:21.644770] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.712 [2024-12-10 05:55:21.656841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.712 [2024-12-10 05:55:21.657292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.712 [2024-12-10 05:55:21.657310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.712 [2024-12-10 05:55:21.657320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.712 [2024-12-10 05:55:21.657493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.712 [2024-12-10 05:55:21.657667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.712 [2024-12-10 05:55:21.657674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.712 [2024-12-10 05:55:21.657680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.712 [2024-12-10 05:55:21.657686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.972 [2024-12-10 05:55:21.669855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.972 [2024-12-10 05:55:21.670267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.972 [2024-12-10 05:55:21.670283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.972 [2024-12-10 05:55:21.670290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.972 [2024-12-10 05:55:21.670449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.972 [2024-12-10 05:55:21.670607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.972 [2024-12-10 05:55:21.670615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.972 [2024-12-10 05:55:21.670621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.972 [2024-12-10 05:55:21.670627] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.972 [2024-12-10 05:55:21.682670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.972 [2024-12-10 05:55:21.683097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.972 [2024-12-10 05:55:21.683112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.972 [2024-12-10 05:55:21.683119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.972 [2024-12-10 05:55:21.683308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.972 [2024-12-10 05:55:21.683476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.972 [2024-12-10 05:55:21.683484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.972 [2024-12-10 05:55:21.683490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.972 [2024-12-10 05:55:21.683496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.972 [2024-12-10 05:55:21.695466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.972 [2024-12-10 05:55:21.695917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.972 [2024-12-10 05:55:21.695961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.972 [2024-12-10 05:55:21.695983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.972 [2024-12-10 05:55:21.696437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.972 [2024-12-10 05:55:21.696609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.972 [2024-12-10 05:55:21.696617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.972 [2024-12-10 05:55:21.696623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.972 [2024-12-10 05:55:21.696629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.972 [2024-12-10 05:55:21.708402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.972 [2024-12-10 05:55:21.708821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.972 [2024-12-10 05:55:21.708837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.972 [2024-12-10 05:55:21.708844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.972 [2024-12-10 05:55:21.709012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.972 [2024-12-10 05:55:21.709179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.972 [2024-12-10 05:55:21.709187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.972 [2024-12-10 05:55:21.709193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.972 [2024-12-10 05:55:21.709199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.972 [2024-12-10 05:55:21.721153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.972 [2024-12-10 05:55:21.721599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.972 [2024-12-10 05:55:21.721643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.972 [2024-12-10 05:55:21.721665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.722097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.722269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.722277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.722284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.722290] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.733901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.734350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.734366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.734373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.734532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.734692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.734699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.734710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.734716] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.746741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.747160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.747176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.747183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.747387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.747559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.747567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.747573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.747579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.759635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.760049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.760087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.760111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.760681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.760851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.760859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.760865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.760871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.772467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.772807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.772824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.772831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.772998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.773167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.773177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.773185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.773191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.785442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.785866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.785883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.785890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.786062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.786240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.786249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.786256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.786262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.798477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.798831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.798848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.798855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.799028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.799201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.799209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.799216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.799227] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.811540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.811962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.811978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.811985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.812152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.812343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.812352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.812358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.812364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.824322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.824759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.824798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.824830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.825439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.825608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.825616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.825622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.825628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.837151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.837509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.837525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.837532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.837700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.837867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.973 [2024-12-10 05:55:21.837875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.973 [2024-12-10 05:55:21.837881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.973 [2024-12-10 05:55:21.837887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.973 [2024-12-10 05:55:21.849966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.973 [2024-12-10 05:55:21.850359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.973 [2024-12-10 05:55:21.850376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.973 [2024-12-10 05:55:21.850383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.973 [2024-12-10 05:55:21.850542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.973 [2024-12-10 05:55:21.850700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.974 [2024-12-10 05:55:21.850708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.974 [2024-12-10 05:55:21.850714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.974 [2024-12-10 05:55:21.850720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.974 [2024-12-10 05:55:21.862704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.974 [2024-12-10 05:55:21.863115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.974 [2024-12-10 05:55:21.863131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.974 [2024-12-10 05:55:21.863138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.974 [2024-12-10 05:55:21.863320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.974 [2024-12-10 05:55:21.863491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.974 [2024-12-10 05:55:21.863499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.974 [2024-12-10 05:55:21.863505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.974 [2024-12-10 05:55:21.863511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.974 [2024-12-10 05:55:21.875625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.974 [2024-12-10 05:55:21.876038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.974 [2024-12-10 05:55:21.876054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.974 [2024-12-10 05:55:21.876061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.974 [2024-12-10 05:55:21.876234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.974 [2024-12-10 05:55:21.876402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.974 [2024-12-10 05:55:21.876410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.974 [2024-12-10 05:55:21.876416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.974 [2024-12-10 05:55:21.876422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.974 [2024-12-10 05:55:21.888360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.974 [2024-12-10 05:55:21.888784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.974 [2024-12-10 05:55:21.888800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.974 [2024-12-10 05:55:21.888807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.974 [2024-12-10 05:55:21.888975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.974 [2024-12-10 05:55:21.889142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.974 [2024-12-10 05:55:21.889150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.974 [2024-12-10 05:55:21.889156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.974 [2024-12-10 05:55:21.889162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.974 5814.40 IOPS, 22.71 MiB/s [2024-12-10T04:55:21.933Z] [2024-12-10 05:55:21.901198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.974 [2024-12-10 05:55:21.901677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.974 [2024-12-10 05:55:21.901721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.974 [2024-12-10 05:55:21.901744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.974 [2024-12-10 05:55:21.902336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.974 [2024-12-10 05:55:21.902873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.974 [2024-12-10 05:55:21.902881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.974 [2024-12-10 05:55:21.902890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.974 [2024-12-10 05:55:21.902897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.974 [2024-12-10 05:55:21.914030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.974 [2024-12-10 05:55:21.914456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.974 [2024-12-10 05:55:21.914473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:03.974 [2024-12-10 05:55:21.914480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:03.974 [2024-12-10 05:55:21.914648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:03.974 [2024-12-10 05:55:21.914815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.974 [2024-12-10 05:55:21.914823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.974 [2024-12-10 05:55:21.914829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.974 [2024-12-10 05:55:21.914835] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.235 [2024-12-10 05:55:21.927063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.235 [2024-12-10 05:55:21.927484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-10 05:55:21.927501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.235 [2024-12-10 05:55:21.927508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.235 [2024-12-10 05:55:21.927674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.235 [2024-12-10 05:55:21.927842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.235 [2024-12-10 05:55:21.927850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.235 [2024-12-10 05:55:21.927856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.235 [2024-12-10 05:55:21.927861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:21.939862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.236 [2024-12-10 05:55:21.940266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-10 05:55:21.940282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.236 [2024-12-10 05:55:21.940289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.236 [2024-12-10 05:55:21.940448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.236 [2024-12-10 05:55:21.940606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.236 [2024-12-10 05:55:21.940614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.236 [2024-12-10 05:55:21.940620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.236 [2024-12-10 05:55:21.940625] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:21.952609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.236 [2024-12-10 05:55:21.953003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-10 05:55:21.953018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.236 [2024-12-10 05:55:21.953025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.236 [2024-12-10 05:55:21.953183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.236 [2024-12-10 05:55:21.953369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.236 [2024-12-10 05:55:21.953377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.236 [2024-12-10 05:55:21.953383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.236 [2024-12-10 05:55:21.953389] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:21.965424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.236 [2024-12-10 05:55:21.965786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-10 05:55:21.965801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.236 [2024-12-10 05:55:21.965808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.236 [2024-12-10 05:55:21.965967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.236 [2024-12-10 05:55:21.966124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.236 [2024-12-10 05:55:21.966132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.236 [2024-12-10 05:55:21.966138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.236 [2024-12-10 05:55:21.966144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:21.978341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.236 [2024-12-10 05:55:21.978769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-10 05:55:21.978784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.236 [2024-12-10 05:55:21.978791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.236 [2024-12-10 05:55:21.978959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.236 [2024-12-10 05:55:21.979133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.236 [2024-12-10 05:55:21.979141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.236 [2024-12-10 05:55:21.979147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.236 [2024-12-10 05:55:21.979153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:21.991088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.236 [2024-12-10 05:55:21.991521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-10 05:55:21.991574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.236 [2024-12-10 05:55:21.991597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.236 [2024-12-10 05:55:21.992106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.236 [2024-12-10 05:55:21.992285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.236 [2024-12-10 05:55:21.992294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.236 [2024-12-10 05:55:21.992300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.236 [2024-12-10 05:55:21.992306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:22.003985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.236 [2024-12-10 05:55:22.004396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-10 05:55:22.004413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.236 [2024-12-10 05:55:22.004420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.236 [2024-12-10 05:55:22.004588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.236 [2024-12-10 05:55:22.004756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.236 [2024-12-10 05:55:22.004764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.236 [2024-12-10 05:55:22.004770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.236 [2024-12-10 05:55:22.004776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.236 [2024-12-10 05:55:22.016853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.236 [2024-12-10 05:55:22.017242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.236 [2024-12-10 05:55:22.017258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.236 [2024-12-10 05:55:22.017265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.236 [2024-12-10 05:55:22.017423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.236 [2024-12-10 05:55:22.017581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.236 [2024-12-10 05:55:22.017589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.236 [2024-12-10 05:55:22.017595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.236 [2024-12-10 05:55:22.017600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.236 [2024-12-10 05:55:22.029583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.236 [2024-12-10 05:55:22.029994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.236 [2024-12-10 05:55:22.030011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.236 [2024-12-10 05:55:22.030018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.236 [2024-12-10 05:55:22.030188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.236 [2024-12-10 05:55:22.030365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.236 [2024-12-10 05:55:22.030374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.236 [2024-12-10 05:55:22.030381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.236 [2024-12-10 05:55:22.030387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.236 [2024-12-10 05:55:22.042625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.236 [2024-12-10 05:55:22.043054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.236 [2024-12-10 05:55:22.043071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.236 [2024-12-10 05:55:22.043079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.236 [2024-12-10 05:55:22.043258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.236 [2024-12-10 05:55:22.043431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.236 [2024-12-10 05:55:22.043440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.236 [2024-12-10 05:55:22.043446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.236 [2024-12-10 05:55:22.043452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.236 [2024-12-10 05:55:22.055508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.236 [2024-12-10 05:55:22.055795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.236 [2024-12-10 05:55:22.055812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.236 [2024-12-10 05:55:22.055819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.236 [2024-12-10 05:55:22.055986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.236 [2024-12-10 05:55:22.056153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.236 [2024-12-10 05:55:22.056161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.236 [2024-12-10 05:55:22.056167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.236 [2024-12-10 05:55:22.056173] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.236 [2024-12-10 05:55:22.068505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.068933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.068950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.068957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.069125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.069314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.069323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.069332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.069339] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.081372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.081774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.081818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.081842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.082302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.082471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.082479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.082486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.082492] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.094191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.094583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.094600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.094607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.094765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.094924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.094932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.094937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.094943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.106912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.107327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.107344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.107351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.107518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.107686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.107694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.107699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.107705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.119915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.120318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.120336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.120343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.120515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.120687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.120696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.120702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.120708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.132887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.133292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.133309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.133317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.133488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.133662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.133670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.133676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.133683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.145958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.146391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.146408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.146416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.146588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.146761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.146769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.146776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.146782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.158988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.159409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.159430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.159437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.159605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.159773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.159781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.159787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.159794] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.172027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.172431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.172448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.172456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.172624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.172792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.172801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.172807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.172813] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.237 [2024-12-10 05:55:22.185071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.237 [2024-12-10 05:55:22.185455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.237 [2024-12-10 05:55:22.185499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.237 [2024-12-10 05:55:22.185522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.237 [2024-12-10 05:55:22.186104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.237 [2024-12-10 05:55:22.186660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.237 [2024-12-10 05:55:22.186668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.237 [2024-12-10 05:55:22.186675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.237 [2024-12-10 05:55:22.186681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.198057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.198463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.198480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.198488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.198665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.198838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.198846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.198852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.198859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.211251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.211623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.211667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.211690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.212284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.212485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.212493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.212500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.212506] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.224107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.224503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.224540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.224566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.225146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.225690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.225699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.225705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.225711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.239299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.239745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.239766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.239776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.240030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.240293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.240305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.240319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.240328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.252196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.252598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.252615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.252622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.252789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.252956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.252964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.252970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.252976] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.265084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.265421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.265438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.265445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.265612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.265780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.265788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.265794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.265800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.277971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.278367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.278408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.278433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.279013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.279611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.279638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.279659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.279679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.290953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.291297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.291316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.291324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.291497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.291670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.291679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.291686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.291694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.303804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.304228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.304269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.304294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.304841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.305008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.498 [2024-12-10 05:55:22.305016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.498 [2024-12-10 05:55:22.305022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.498 [2024-12-10 05:55:22.305028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.498 [2024-12-10 05:55:22.316603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.498 [2024-12-10 05:55:22.317055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.498 [2024-12-10 05:55:22.317098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.498 [2024-12-10 05:55:22.317121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.498 [2024-12-10 05:55:22.317683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.498 [2024-12-10 05:55:22.317851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.499 [2024-12-10 05:55:22.317860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.499 [2024-12-10 05:55:22.317866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.499 [2024-12-10 05:55:22.317872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.499 [2024-12-10 05:55:22.329617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.499 [2024-12-10 05:55:22.329959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.499 [2024-12-10 05:55:22.330011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.499 [2024-12-10 05:55:22.330034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.499 [2024-12-10 05:55:22.330632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.499 [2024-12-10 05:55:22.330848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.499 [2024-12-10 05:55:22.330856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.499 [2024-12-10 05:55:22.330862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.499 [2024-12-10 05:55:22.330868] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.499 [2024-12-10 05:55:22.342587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.499 [2024-12-10 05:55:22.343017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.499 [2024-12-10 05:55:22.343060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.499 [2024-12-10 05:55:22.343083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.499 [2024-12-10 05:55:22.343686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.499 [2024-12-10 05:55:22.343860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.499 [2024-12-10 05:55:22.343868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.499 [2024-12-10 05:55:22.343875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.499 [2024-12-10 05:55:22.343881] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.499 [2024-12-10 05:55:22.355612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.499 [2024-12-10 05:55:22.356050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.499 [2024-12-10 05:55:22.356094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.499 [2024-12-10 05:55:22.356117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.499 [2024-12-10 05:55:22.356712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.499 [2024-12-10 05:55:22.357131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.499 [2024-12-10 05:55:22.357139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.499 [2024-12-10 05:55:22.357145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.499 [2024-12-10 05:55:22.357151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.499 [2024-12-10 05:55:22.368591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.499 [2024-12-10 05:55:22.369002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.499 [2024-12-10 05:55:22.369018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420
00:30:04.499 [2024-12-10 05:55:22.369025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set
00:30:04.499 [2024-12-10 05:55:22.369193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor
00:30:04.499 [2024-12-10 05:55:22.369393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.499 [2024-12-10 05:55:22.369402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.499 [2024-12-10 05:55:22.369408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.499 [2024-12-10 05:55:22.369414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.499 [2024-12-10 05:55:22.381543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.499 [2024-12-10 05:55:22.381912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.499 [2024-12-10 05:55:22.381928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.499 [2024-12-10 05:55:22.381935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.499 [2024-12-10 05:55:22.382103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.499 [2024-12-10 05:55:22.382277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.499 [2024-12-10 05:55:22.382286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.499 [2024-12-10 05:55:22.382293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.499 [2024-12-10 05:55:22.382299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.499 [2024-12-10 05:55:22.394522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.499 [2024-12-10 05:55:22.394887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.499 [2024-12-10 05:55:22.394903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.499 [2024-12-10 05:55:22.394910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.499 [2024-12-10 05:55:22.395077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.499 [2024-12-10 05:55:22.395250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.499 [2024-12-10 05:55:22.395258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.499 [2024-12-10 05:55:22.395264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.499 [2024-12-10 05:55:22.395271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.499 [2024-12-10 05:55:22.407510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.499 [2024-12-10 05:55:22.407848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.499 [2024-12-10 05:55:22.407893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.499 [2024-12-10 05:55:22.407915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.499 [2024-12-10 05:55:22.408510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.499 [2024-12-10 05:55:22.409097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.499 [2024-12-10 05:55:22.409105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.499 [2024-12-10 05:55:22.409114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.499 [2024-12-10 05:55:22.409120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.499 [2024-12-10 05:55:22.420424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.499 [2024-12-10 05:55:22.420786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.499 [2024-12-10 05:55:22.420803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.499 [2024-12-10 05:55:22.420811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.499 [2024-12-10 05:55:22.420983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.499 [2024-12-10 05:55:22.421156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.499 [2024-12-10 05:55:22.421164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.499 [2024-12-10 05:55:22.421171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.499 [2024-12-10 05:55:22.421177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.499 [2024-12-10 05:55:22.433429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.499 [2024-12-10 05:55:22.433760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.499 [2024-12-10 05:55:22.433777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.499 [2024-12-10 05:55:22.433784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.499 [2024-12-10 05:55:22.433952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.499 [2024-12-10 05:55:22.434120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.499 [2024-12-10 05:55:22.434129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.499 [2024-12-10 05:55:22.434135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.499 [2024-12-10 05:55:22.434141] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
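The run of entries above repeats one failure signature: posix_sock_create reports connect() failed, errno = 111 (ECONNREFUSED on Linux), so every controller reset against 10.0.0.2:4420 dies before the TCP handshake completes — expected here, since the test has torn the target down. A minimal sketch of how that errno arises when nothing is listening on the target port (illustrative only, not SPDK code; the loopback host and NVMe/TCP default port are assumptions):

```python
import errno
import socket

def connect_errno(host: str, port: int) -> int:
    """Attempt a TCP connect and return 0 on success or the errno on
    failure. With no listener on the port, Linux reports
    ECONNREFUSED (111), matching the posix_sock_create errors above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno if e.errno is not None else -1
    finally:
        s.close()

# With no NVMe/TCP target listening on port 4420, this returns 111.
print(connect_errno("127.0.0.1", 4420))
```

The same errno keeps recurring in the log because each reconnect attempt races the target restart below.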
00:30:04.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 300258 Killed "${NVMF_APP[@]}" "$@" 00:30:04.499 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:04.499 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:04.499 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.499 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.499 [2024-12-10 05:55:22.446454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.499 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.499 [2024-12-10 05:55:22.446777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.500 [2024-12-10 05:55:22.446794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.500 [2024-12-10 05:55:22.446801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.500 [2024-12-10 05:55:22.446974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.500 [2024-12-10 05:55:22.447150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.500 [2024-12-10 05:55:22.447159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.500 [2024-12-10 05:55:22.447164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:04.500 [2024-12-10 05:55:22.447171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=301582 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 301582 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 301582 ']' 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
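The tgt_init step above kills the previous target (the shell reports PID 300258 Killed), relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and then waitforlisten blocks until the new process owns /var/tmp/spdk.sock. A rough sketch of that wait step (an illustrative reimplementation, not the autotest_common.sh helper; the polling interval is an assumption):

```python
import os
import time

def waitforlisten(sock_path: str, timeout_s: float = 5.0,
                  poll_s: float = 0.05) -> bool:
    """Poll until a UNIX-domain socket path appears, as the log's
    'Waiting for process to start up and listen on UNIX domain
    socket /var/tmp/spdk.sock...' step does. Returns False if the
    target never came up before the deadline."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            return True
        time.sleep(poll_s)
    return False
```

Only once this returns does the script start issuing RPCs against the freshly started nvmfpid (301582 in this run).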
00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.759 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.759 [2024-12-10 05:55:22.459556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.759 [2024-12-10 05:55:22.459940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-12-10 05:55:22.459957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.759 [2024-12-10 05:55:22.459964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.759 [2024-12-10 05:55:22.460136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.759 [2024-12-10 05:55:22.460316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.759 [2024-12-10 05:55:22.460325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.759 [2024-12-10 05:55:22.460331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.759 [2024-12-10 05:55:22.460337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.759 [2024-12-10 05:55:22.472541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.759 [2024-12-10 05:55:22.472928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-12-10 05:55:22.472943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.759 [2024-12-10 05:55:22.472950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.759 [2024-12-10 05:55:22.473122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.759 [2024-12-10 05:55:22.473303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.759 [2024-12-10 05:55:22.473312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.759 [2024-12-10 05:55:22.473318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.759 [2024-12-10 05:55:22.473327] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.759 [2024-12-10 05:55:22.485555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.759 [2024-12-10 05:55:22.485848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-12-10 05:55:22.485865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.759 [2024-12-10 05:55:22.485872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.759 [2024-12-10 05:55:22.486044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.759 [2024-12-10 05:55:22.486226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.759 [2024-12-10 05:55:22.486236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.759 [2024-12-10 05:55:22.486243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.759 [2024-12-10 05:55:22.486250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.759 [2024-12-10 05:55:22.498475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.759 [2024-12-10 05:55:22.498800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-12-10 05:55:22.498817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.498825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.498993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.499161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.499169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.499175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.499181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.760 [2024-12-10 05:55:22.502380] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:30:04.760 [2024-12-10 05:55:22.502419] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.760 [2024-12-10 05:55:22.511538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.511903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.511920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.511928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.512100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.512281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.512290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.512300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.512306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.760 [2024-12-10 05:55:22.524593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.524997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.525015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.525022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.525194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.525374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.525382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.525389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.525395] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.760 [2024-12-10 05:55:22.537640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.538023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.538040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.538048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.538227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.538401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.538411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.538419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.538426] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.760 [2024-12-10 05:55:22.550655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.550978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.550994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.551002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.551174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.551352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.551361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.551368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.551374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.760 [2024-12-10 05:55:22.563658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.564040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.564056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.564063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.564242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.564425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.564433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.564440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.564446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.760 [2024-12-10 05:55:22.576735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.577119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.577135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.577143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.577336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.577510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.760 [2024-12-10 05:55:22.577519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.760 [2024-12-10 05:55:22.577526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.760 [2024-12-10 05:55:22.577533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.760 [2024-12-10 05:55:22.587445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.760 [2024-12-10 05:55:22.589639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.760 [2024-12-10 05:55:22.590064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-12-10 05:55:22.590082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.760 [2024-12-10 05:55:22.590090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.760 [2024-12-10 05:55:22.590268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.760 [2024-12-10 05:55:22.590444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.590452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.590460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.590478] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.602555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.602966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.602983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.602996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.603163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.603339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.603349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.603354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.603361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.615527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.615946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.615963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.615970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.616138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.616333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.616342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.616349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.616355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.761 [2024-12-10 05:55:22.626679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.761 [2024-12-10 05:55:22.626702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.761 [2024-12-10 05:55:22.626709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.761 [2024-12-10 05:55:22.626716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:04.761 [2024-12-10 05:55:22.626721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.761 [2024-12-10 05:55:22.627963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.761 [2024-12-10 05:55:22.627996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.761 [2024-12-10 05:55:22.627997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.761 [2024-12-10 05:55:22.628595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.629028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.629047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.629055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.629235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.629409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.629418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.629428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.629435] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.641638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.642069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.642089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.642097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.642278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.642453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.642462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.642469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.642475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.654669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.655083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.655104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.655112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.655292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.655467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.655475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.655482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.655489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.667692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.668118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.668138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.668146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.668323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.668499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.668507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.668515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.668521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.680711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.681067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.681087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.681095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.681273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.681447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.681456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.681463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.681469] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.693687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.694099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.694116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.694123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.694300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.694474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.694482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.694489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.694496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.761 [2024-12-10 05:55:22.706743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.761 [2024-12-10 05:55:22.707131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-12-10 05:55:22.707147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:04.761 [2024-12-10 05:55:22.707155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:04.761 [2024-12-10 05:55:22.707332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:04.761 [2024-12-10 05:55:22.707506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.761 [2024-12-10 05:55:22.707514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.761 [2024-12-10 05:55:22.707521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.761 [2024-12-10 05:55:22.707527] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.019 [2024-12-10 05:55:22.719721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.019 [2024-12-10 05:55:22.720123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.019 [2024-12-10 05:55:22.720140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.019 [2024-12-10 05:55:22.720152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.019 [2024-12-10 05:55:22.720328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.019 [2024-12-10 05:55:22.720502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.019 [2024-12-10 05:55:22.720510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.019 [2024-12-10 05:55:22.720516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.019 [2024-12-10 05:55:22.720522] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.019 [2024-12-10 05:55:22.732707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.019 [2024-12-10 05:55:22.733109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.019 [2024-12-10 05:55:22.733126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.019 [2024-12-10 05:55:22.733133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.019 [2024-12-10 05:55:22.733310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.019 [2024-12-10 05:55:22.733483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.019 [2024-12-10 05:55:22.733492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.019 [2024-12-10 05:55:22.733499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.019 [2024-12-10 05:55:22.733505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.019 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.020 [2024-12-10 05:55:22.745705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 [2024-12-10 05:55:22.746090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.746107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.746115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 [2024-12-10 05:55:22.746293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.746469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.746482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.746491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.746500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.020 [2024-12-10 05:55:22.758694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 [2024-12-10 05:55:22.759049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.759070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.759077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 [2024-12-10 05:55:22.759255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.759430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.759440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.759446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.759454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.020 [2024-12-10 05:55:22.771812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.020 [2024-12-10 05:55:22.772216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.772240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.772247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 [2024-12-10 05:55:22.772419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.772592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.772600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.772607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.772614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.020 [2024-12-10 05:55:22.776540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.020 [2024-12-10 05:55:22.784822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 [2024-12-10 05:55:22.785239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.785256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.785263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 [2024-12-10 05:55:22.785437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.785610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.785621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.785627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.785634] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.020 [2024-12-10 05:55:22.797817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 [2024-12-10 05:55:22.798262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.798280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.798287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 [2024-12-10 05:55:22.798459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.798633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.798642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.798651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.798657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.020 Malloc0 00:30:05.020 [2024-12-10 05:55:22.810865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 [2024-12-10 05:55:22.811234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.811253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.811261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.020 [2024-12-10 05:55:22.811434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.811607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.811616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.811622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.811629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.020 [2024-12-10 05:55:22.823967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 [2024-12-10 05:55:22.824357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.020 [2024-12-10 05:55:22.824374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cdb20 with addr=10.0.0.2, port=4420 00:30:05.020 [2024-12-10 05:55:22.824385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cdb20 is same with the state(6) to be set 00:30:05.020 [2024-12-10 05:55:22.824558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cdb20 (9): Bad file descriptor 00:30:05.020 [2024-12-10 05:55:22.824731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.020 [2024-12-10 05:55:22.824740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.020 [2024-12-10 05:55:22.824746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.020 [2024-12-10 05:55:22.824752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.020 [2024-12-10 05:55:22.834352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.020 [2024-12-10 05:55:22.836941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.020 05:55:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 300519 00:30:05.020 [2024-12-10 05:55:22.861319] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:30:05.950 4897.67 IOPS, 19.13 MiB/s [2024-12-10T04:55:25.279Z] 5855.29 IOPS, 22.87 MiB/s [2024-12-10T04:55:26.210Z] 6583.75 IOPS, 25.72 MiB/s [2024-12-10T04:55:27.139Z] 7138.00 IOPS, 27.88 MiB/s [2024-12-10T04:55:28.069Z] 7571.20 IOPS, 29.57 MiB/s [2024-12-10T04:55:28.999Z] 7917.82 IOPS, 30.93 MiB/s [2024-12-10T04:55:29.928Z] 8210.75 IOPS, 32.07 MiB/s [2024-12-10T04:55:31.297Z] 8467.00 IOPS, 33.07 MiB/s [2024-12-10T04:55:32.227Z] 8677.50 IOPS, 33.90 MiB/s [2024-12-10T04:55:32.228Z] 8857.87 IOPS, 34.60 MiB/s 00:30:14.269 Latency(us) 00:30:14.269 [2024-12-10T04:55:32.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.269 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:14.269 Verification LBA range: start 0x0 length 0x4000 00:30:14.269 Nvme1n1 : 15.01 8860.97 34.61 11051.99 0.00 6408.25 639.76 19099.06 00:30:14.269 [2024-12-10T04:55:32.228Z] =================================================================================================================== 00:30:14.269 [2024-12-10T04:55:32.228Z] Total : 8860.97 34.61 11051.99 0.00 6408.25 639.76 19099.06 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.269 05:55:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.269 rmmod nvme_tcp 00:30:14.269 rmmod nvme_fabrics 00:30:14.269 rmmod nvme_keyring 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 301582 ']' 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 301582 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 301582 ']' 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 301582 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301582 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301582' 00:30:14.269 killing process 
with pid 301582 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 301582 00:30:14.269 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 301582 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.528 05:55:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.066 00:30:17.066 real 0m27.018s 00:30:17.066 user 1m1.080s 00:30:17.066 sys 0m7.385s 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.066 ************************************ 00:30:17.066 END TEST nvmf_bdevperf 
00:30:17.066 ************************************ 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.066 05:55:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.066 ************************************ 00:30:17.066 START TEST nvmf_target_disconnect 00:30:17.066 ************************************ 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:17.067 * Looking for test storage... 00:30:17.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.067 05:55:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.067 --rc genhtml_branch_coverage=1 00:30:17.067 --rc genhtml_function_coverage=1 00:30:17.067 --rc genhtml_legend=1 00:30:17.067 --rc geninfo_all_blocks=1 00:30:17.067 --rc geninfo_unexecuted_blocks=1 
00:30:17.067 00:30:17.067 ' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.067 --rc genhtml_branch_coverage=1 00:30:17.067 --rc genhtml_function_coverage=1 00:30:17.067 --rc genhtml_legend=1 00:30:17.067 --rc geninfo_all_blocks=1 00:30:17.067 --rc geninfo_unexecuted_blocks=1 00:30:17.067 00:30:17.067 ' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.067 --rc genhtml_branch_coverage=1 00:30:17.067 --rc genhtml_function_coverage=1 00:30:17.067 --rc genhtml_legend=1 00:30:17.067 --rc geninfo_all_blocks=1 00:30:17.067 --rc geninfo_unexecuted_blocks=1 00:30:17.067 00:30:17.067 ' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.067 --rc genhtml_branch_coverage=1 00:30:17.067 --rc genhtml_function_coverage=1 00:30:17.067 --rc genhtml_legend=1 00:30:17.067 --rc geninfo_all_blocks=1 00:30:17.067 --rc geninfo_unexecuted_blocks=1 00:30:17.067 00:30:17.067 ' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.067 05:55:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.067 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.068 05:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:23.648 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.649 
05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:23.649 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:23.649 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:23.649 Found net devices under 0000:af:00.0: cvl_0_0 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:23.649 Found net devices under 0000:af:00.1: cvl_0_1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.649 05:55:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:30:23.649 00:30:23.649 --- 10.0.0.2 ping statistics --- 00:30:23.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.649 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:30:23.649 00:30:23.649 --- 10.0.0.1 ping statistics --- 00:30:23.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.649 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.649 05:55:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.649 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:23.649 ************************************ 00:30:23.649 START TEST nvmf_target_disconnect_tc1 00:30:23.650 ************************************ 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:23.650 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.908 [2024-12-10 05:55:41.621585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.909 [2024-12-10 05:55:41.621628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e410 with 
addr=10.0.0.2, port=4420 00:30:23.909 [2024-12-10 05:55:41.621649] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:23.909 [2024-12-10 05:55:41.621661] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:23.909 [2024-12-10 05:55:41.621670] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:23.909 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:23.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:23.909 Initializing NVMe Controllers 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:23.909 00:30:23.909 real 0m0.125s 00:30:23.909 user 0m0.053s 00:30:23.909 sys 0m0.071s 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:23.909 ************************************ 00:30:23.909 END TEST nvmf_target_disconnect_tc1 00:30:23.909 ************************************ 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:23.909 05:55:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:23.909 ************************************ 00:30:23.909 START TEST nvmf_target_disconnect_tc2 00:30:23.909 ************************************ 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=307075 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 307075 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 307075 ']' 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.909 05:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.909 [2024-12-10 05:55:41.764886] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:30:23.909 [2024-12-10 05:55:41.764930] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.909 [2024-12-10 05:55:41.847593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.167 [2024-12-10 05:55:41.887548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.167 [2024-12-10 05:55:41.887583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.167 [2024-12-10 05:55:41.887590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.167 [2024-12-10 05:55:41.887596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.167 [2024-12-10 05:55:41.887600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:24.167 [2024-12-10 05:55:41.889162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:24.167 [2024-12-10 05:55:41.889268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:24.168 [2024-12-10 05:55:41.889355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:24.168 [2024-12-10 05:55:41.889355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:24.734 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.734 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:24.734 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.734 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.735 Malloc0 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.735 05:55:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.735 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.993 [2024-12-10 05:55:42.690251] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.993 05:55:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.993 [2024-12-10 05:55:42.715304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=307320 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:24.993 05:55:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.898 05:55:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 307075 00:30:26.898 05:55:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read 
completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 [2024-12-10 05:55:44.744115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 
00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 
00:30:26.898 [2024-12-10 05:55:44.744315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Read completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.898 starting I/O failed 00:30:26.898 Write completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 
starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Write completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Write completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Write completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Write completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 Read completed with error (sct=0, sc=8) 00:30:26.899 starting I/O failed 00:30:26.899 [2024-12-10 05:55:44.744516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.899 [2024-12-10 05:55:44.744629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.744652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.744753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.744763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.744979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.745010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.745192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.745236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.745361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.745392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.745580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.745611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.745749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.745782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.746027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.746060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.746185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.746226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.746425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.746457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.746596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.746628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.746802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.746825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.746937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.746971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.747080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.747257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.747434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.747579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.747735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.747884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.747970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.747980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.748055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.748065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.748141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.748151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.748316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.748348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.748472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.748505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.748682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.748722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.748848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.748881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.748983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.749015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.749152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.749195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.749327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.749339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.749418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.749428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.749564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.749576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 00:30:26.899 [2024-12-10 05:55:44.749716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-12-10 05:55:44.749749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.899 qpair failed and we were unable to recover it. 
00:30:26.899 [2024-12-10 05:55:44.749927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.899 [2024-12-10 05:55:44.749958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:26.899 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure records (errno = 111, addr=10.0.0.2, port=4420) repeated through 2024-12-10 05:55:44.773094 (log time 00:30:26.903), elided ...]
00:30:26.903 [2024-12-10 05:55:44.773272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.773307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.773437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.773469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.773647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.773679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.773850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.773884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.774052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.774084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.774269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.774303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.774494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.774528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.774648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.774680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.774849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.774882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.775056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.775088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.775258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.775291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.775419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.775452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.775626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.775660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.775783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.775815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.775986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.776019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.776261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.776294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.776554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.776586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.776688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.776721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.776823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.776855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.777042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.777074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.777243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.777277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.777468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.777501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.777627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.777660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.777840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.777873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.778002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.778041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.778164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.778196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.778374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.778406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.778507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.778540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.778658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.778691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.778860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.778892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.779027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.779059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.779161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.779193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.779302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.779335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.779438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.779471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 00:30:26.903 [2024-12-10 05:55:44.779708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.903 [2024-12-10 05:55:44.779741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.903 qpair failed and we were unable to recover it. 
00:30:26.903 [2024-12-10 05:55:44.779857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.779889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.780066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.780099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.780209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.780251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.780493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.780526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.780702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.780734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.780970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.781003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.781111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.781144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.781311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.781345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.781459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.781491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.781670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.781703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.781885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.781917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.782100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.782132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.782305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.782339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.782449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.782481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.782662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.782694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.782936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.782969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.783095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.783128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.783339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.783372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.783492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.783524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.783660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.783692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.783807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.783840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.784020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.784053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.784266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.784300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.784490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.784524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.784780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.784813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.785073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.785106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.785283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.785318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.785519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.785553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.785822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.785856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.786100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.786138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.786278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.786313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.786529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.786562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.786669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.786701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.786834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.786867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.787057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.787090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 
00:30:26.904 [2024-12-10 05:55:44.787282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.787316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.787513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.787546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.787660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.787950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.904 [2024-12-10 05:55:44.787983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.904 qpair failed and we were unable to recover it. 00:30:26.904 [2024-12-10 05:55:44.788094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.905 [2024-12-10 05:55:44.788126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.905 qpair failed and we were unable to recover it. 
00:30:26.905 [2024-12-10 05:55:44.788322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.905 [2024-12-10 05:55:44.788357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.905 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111, ECONNREFUSED) / "qpair failed and we were unable to recover it" message pair repeated for tqpair=0x7f1454000b90 and then tqpair=0x1b2f500 (addr=10.0.0.2, port=4420) through timestamp 05:55:44.811811; repeats omitted ...]
00:30:26.908 [2024-12-10 05:55:44.811923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.811956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.812139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.812171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.812309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.812342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.812539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.812572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.812757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.812790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.813001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.813032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.813146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.813179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.813430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.813464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.813595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.813633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.813758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.813791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.814074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.814106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.814293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.814327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.814512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.814545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.814729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.814762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.814875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.814909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.815094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.815128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.815317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.815352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.815528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.815561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.815797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.815831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.815962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.815994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.816107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.816140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.816277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.816342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.816545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.816577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.816757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.816790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.816906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.816939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.817114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.817148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.817339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.817373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.817485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.817518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.817640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.817674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.817860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.817892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.818010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.818043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.818330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.818364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.818490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.818522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.818634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.818668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 00:30:26.908 [2024-12-10 05:55:44.818855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.818888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.908 qpair failed and we were unable to recover it. 
00:30:26.908 [2024-12-10 05:55:44.819171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.908 [2024-12-10 05:55:44.819206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.819406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.819438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.819554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.819588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.819718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.819752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.819937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.819969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.820151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.820186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.820499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.820538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.820666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.820699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.820912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.820945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.821206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.821249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.821446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.821478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.821651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.821684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.821806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.821839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.821956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.821994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.822187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.822227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.822473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.822506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.822682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.822714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.822900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.822932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.823214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.823259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.823465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.823498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.823668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.823700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.823880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.823912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.824041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.824072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.824260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.824294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.824477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.824510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.824749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.824781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.824965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.824998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.825206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.825248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.825425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.825458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.825715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.825747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.825882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.825915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.826093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.826124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.826300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.826332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.826606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.826639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.826922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.826953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.909 [2024-12-10 05:55:44.827148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.827180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.827368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.827401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.827588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.827620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.827793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.827826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 00:30:26.909 [2024-12-10 05:55:44.828019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.909 [2024-12-10 05:55:44.828052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.909 qpair failed and we were unable to recover it. 
00:30:26.910 [2024-12-10 05:55:44.828162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.910 [2024-12-10 05:55:44.828200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.910 qpair failed and we were unable to recover it. 00:30:26.910 [2024-12-10 05:55:44.828416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.910 [2024-12-10 05:55:44.828450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.910 qpair failed and we were unable to recover it. 00:30:26.910 [2024-12-10 05:55:44.828569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.910 [2024-12-10 05:55:44.828602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.910 qpair failed and we were unable to recover it. 00:30:26.910 [2024-12-10 05:55:44.828783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.910 [2024-12-10 05:55:44.828814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.910 qpair failed and we were unable to recover it. 00:30:26.910 [2024-12-10 05:55:44.829065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.910 [2024-12-10 05:55:44.829097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:26.910 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.851597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.851630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.851753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.851786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.852001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.852033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.852203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.852244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.852441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.852473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.852603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.852636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.852809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.852841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.852973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.853006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.853236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.853270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.853388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.853421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.853553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.853586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.853848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.853881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.854001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.854039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.854231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.854265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.854443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.854477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.854599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.854632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.854814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.854847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.855106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.855139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.855334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.855368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.855550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.855582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.855830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.855864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.855982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.856014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.856200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.856263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.856455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.856487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.856747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.856779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.856981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.857014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.857284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.857357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.857650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.857687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.857937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.857970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.858148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.858181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.858369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.858403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.858663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.858696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.858893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.858926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.859115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.859148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.859323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.859356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-12-10 05:55:44.859594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.859626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.859838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.859870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.860042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-12-10 05:55:44.860074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-12-10 05:55:44.860186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.860239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.860380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.860423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.860605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.860638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.860875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.860908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.861026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.861058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.861175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.861208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.861347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.861381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.861619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.861652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.861913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.861946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.862125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.862158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.862298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.862333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.862532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.862565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.862769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.862802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.862922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.862955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.863215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.863259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.863452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.863485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.863657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.863689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.863878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.863910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.864084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.864117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.864245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.864279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.864412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.864444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.864706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.864738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.864842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.864875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.865119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.865152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.865361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.865397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.865569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.865601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.865714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.865747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.866021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.866271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.866306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.866497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.866530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.866739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.866771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.866946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.866979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.867170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.867202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.867395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.867428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.867618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.867650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.867823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.867855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.868039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.868071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-12-10 05:55:44.868261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.868296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.868410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.868441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-12-10 05:55:44.868632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-12-10 05:55:44.868663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 00:30:27.195 [2024-12-10 05:55:44.868907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.868939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 00:30:27.195 [2024-12-10 05:55:44.869133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.869172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 
00:30:27.195 [2024-12-10 05:55:44.869303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.869336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 00:30:27.195 [2024-12-10 05:55:44.869469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.869501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 00:30:27.195 [2024-12-10 05:55:44.869685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.869718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 00:30:27.195 [2024-12-10 05:55:44.869829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.869861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 00:30:27.195 [2024-12-10 05:55:44.870034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.195 [2024-12-10 05:55:44.870066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.195 qpair failed and we were unable to recover it. 
00:30:27.197 [2024-12-10 05:55:44.892010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.197 [2024-12-10 05:55:44.892082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.197 qpair failed and we were unable to recover it.
00:30:27.200 [2024-12-10 05:55:44.913696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.913729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.913838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.913872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.914047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.914079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.914265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.914299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.914482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.914515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 
00:30:27.200 [2024-12-10 05:55:44.914701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.914733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.914921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.914954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.915124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.915157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.915335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.200 [2024-12-10 05:55:44.915369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.200 qpair failed and we were unable to recover it. 00:30:27.200 [2024-12-10 05:55:44.915552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.915585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.915753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.915786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.916024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.916057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.916299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.916332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.916468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.916500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.916672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.916704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.916908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.916941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.917043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.917088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.917287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.917320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.917587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.917620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.917818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.917850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.917959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.917991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.918259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.918293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.918409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.918442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.918653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.918687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.918858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.918891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.919085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.919118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.919319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.919353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.919543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.919575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.919698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.919731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.919937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.919971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.920103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.920136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.920420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.920455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.920571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.920603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.920809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.920841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.920952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.920985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.921165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.921197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.921393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.921426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.921612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.921646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.921813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.921846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.922020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.922052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.922170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.922202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.922404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.922438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.922563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.922596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.922845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.922878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 00:30:27.201 [2024-12-10 05:55:44.922999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.923032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.201 qpair failed and we were unable to recover it. 
00:30:27.201 [2024-12-10 05:55:44.923148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.201 [2024-12-10 05:55:44.923181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.923369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.923403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.923622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.923654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.923889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.923922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.924187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.924229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-12-10 05:55:44.924415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.924447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.924695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.924728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.924963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.924996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.925168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.925201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.925339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.925372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-12-10 05:55:44.925646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.925679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.925858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.925897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.926075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.926108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.926300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.926333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.926518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.926551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-12-10 05:55:44.926792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.926825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.926942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.926975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.927159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.927191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.927439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.927473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.927597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.927629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-12-10 05:55:44.927749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.927782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.927963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.927995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.928174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.928205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.928404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.928437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.928568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.928600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-12-10 05:55:44.928780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.928813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.928997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.929030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.929208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.929249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.929370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.929402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-12-10 05:55:44.929582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.929614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-12-10 05:55:44.929787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-12-10 05:55:44.929822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 
00:30:27.202 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f1454000b90 (addr=10.0.0.2, port=4420) from 2024-12-10 05:55:44.929998 through 05:55:44.953496, differing only in timestamps ...]
00:30:27.205 [2024-12-10 05:55:44.953681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.953713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 
00:30:27.205 [2024-12-10 05:55:44.953886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.953920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 00:30:27.205 [2024-12-10 05:55:44.954114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.954147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 00:30:27.205 [2024-12-10 05:55:44.954332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.954368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 00:30:27.205 [2024-12-10 05:55:44.954541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.954574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 00:30:27.205 [2024-12-10 05:55:44.954813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.954846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 
00:30:27.205 [2024-12-10 05:55:44.954960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.954994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 00:30:27.205 [2024-12-10 05:55:44.955111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.205 [2024-12-10 05:55:44.955145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.205 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.955386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.955421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.955659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.955691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.955807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.955840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.955981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.956014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.956255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.956287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.956416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.956448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.956652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.956685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.956866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.956899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.957084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.957118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.957235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.957271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.957463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.957495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.957747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.957780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.957907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.957940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.958133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.958166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.958302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.958335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.958526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.958559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.958763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.958796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.958912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.958944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.959124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.959157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.959423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.959468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.959603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.959636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.959825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.959858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.960046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.960079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.960188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.960245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.960378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.960410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.960617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.960650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.960831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.960865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.961042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.961075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.961211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.961254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.961425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.961458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.961643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.961676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.961938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.961970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.962096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.962129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.206 [2024-12-10 05:55:44.962329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.962365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.962480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.962513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.962752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.962786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.962971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.963004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 00:30:27.206 [2024-12-10 05:55:44.963117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.206 [2024-12-10 05:55:44.963150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.206 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.963324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.963358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.963531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.963563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.963773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.963806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.963922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.963956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.964156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.964189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.964317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.964351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.964524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.964557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.964672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.964705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.964872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.964944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.965163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.965200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.965487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.965520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.965789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.965822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.966064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.966097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.966283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.966317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.966558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.966591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.966773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.966806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.966987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.967020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.967151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.967184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.967365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.967398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.967660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.967692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.967874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.967907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.968121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.968153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.968370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.968403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.968524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.968557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.968665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.968698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.968936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.968967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.969074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.969107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.969358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.969391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.969512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.969545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.969808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.969842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.970012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.970046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.970235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.970270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.970401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.970434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.970627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.970660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.970870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.970901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 
00:30:27.207 [2024-12-10 05:55:44.971010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.971047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.971254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.971288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.971545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.971577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.971767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.207 [2024-12-10 05:55:44.971799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.207 qpair failed and we were unable to recover it. 00:30:27.207 [2024-12-10 05:55:44.971909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-12-10 05:55:44.971942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 
00:30:27.208 [2024-12-10 05:55:44.972197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.972239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.972413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.972446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.972561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.972594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.972786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.972819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.973080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.973113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.973357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.973390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.973562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.973594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.973777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.973809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.973984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.974016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.974191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.974237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.974436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.974469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.974658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.974690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.974872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.974906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.975022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.975055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.975312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.975345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.975526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.975557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.975747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.975780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.975954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.975986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.976158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.976191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.976443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.976477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.976765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.976797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.976972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.977004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.977138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.977170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.977380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.977413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.977515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.977547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.977677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.977709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.977974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.978007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.978124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.978156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.978407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.978441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.978563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.978595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.978785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.978816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.979007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.979039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.979297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.979332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.979522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.979555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.979674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.979706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.979891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.979928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.980040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.980072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.980188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.208 [2024-12-10 05:55:44.980227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.208 qpair failed and we were unable to recover it.
00:30:27.208 [2024-12-10 05:55:44.980339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.980371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.980488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.980520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.980696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.980729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.980968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.981000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.981242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.981275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.981470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.981502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.981698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.981730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.982003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.982035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.982244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.982277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.982407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.982439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.982632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.982665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.982792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.982824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.983010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.983042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.983215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.983259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.983441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.983473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.983655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.983689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.983878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.983911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.984170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.984202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.984346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.984378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.984617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.984650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.984786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.984819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.985098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.985131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.985340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.985373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.985603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.985634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.985835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.985868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.986072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.986105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.986210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.986251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.986425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.986457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.986726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.986759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.986934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.986968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.987069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.987101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.987287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.987320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.987426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.987458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.987629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.209 [2024-12-10 05:55:44.987661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.209 qpair failed and we were unable to recover it.
00:30:27.209 [2024-12-10 05:55:44.987923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.987955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.988051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.988084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.988265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.988299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.988470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.988506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.988700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.988732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.988883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.988917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.989177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.989209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.989476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.989510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.989701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.989734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.989912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.989945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.990054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.990087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.990322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.990356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.990538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.990570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.990809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.990842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.990975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.991007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.991142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.991175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.991352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.991386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.991577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.991611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.991797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.991831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.991947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.991979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.992151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.992184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.992362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.992397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.992594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.210 [2024-12-10 05:55:44.992626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.210 qpair failed and we were unable to recover it.
00:30:27.210 [2024-12-10 05:55:44.992888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.992921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.993041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.993075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.993264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.993297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.993410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.993443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.993683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.993716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-12-10 05:55:44.993890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.993922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.994108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.994141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.994274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.994309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.994445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.994475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.994654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.994686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-12-10 05:55:44.994890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.994922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.995067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.995101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.995275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.995308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.995432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.995465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.995591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.995624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-12-10 05:55:44.995799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-12-10 05:55:44.995832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-12-10 05:55:44.996004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.996037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.996143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.996175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.996381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.996415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.996537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.996570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:44.996749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.996788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.996970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.997005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.997186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.997228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.997466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.997500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.997681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.997714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:44.997906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.997939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.998109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.998141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.998268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.998304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.998570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.998604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.998738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.998771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:44.998877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.998910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.999148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.999182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.999375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.999408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.999528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.999561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:44.999768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:44.999802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:44.999993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.000027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.000225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.000260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.000432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.000466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.000636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.000669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.000794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.000829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:45.001010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.001043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.001342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.001376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.001565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.001600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.001722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.001754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.002029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.002063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:45.002261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.002294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.002551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.002584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.002782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.002816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.002987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.003021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.003194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.003237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:45.003425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.003457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.003650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.003684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.003804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.003836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.004012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.004045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-12-10 05:55:45.004185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-12-10 05:55:45.004227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-12-10 05:55:45.004415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.004448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.004644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.004677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.004859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.004894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.005026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.005060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.005175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.005208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-12-10 05:55:45.005505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.005543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.005818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.005850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.006050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.006084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.006211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.006274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.006510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.006542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-12-10 05:55:45.006667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.006700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.006896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.006930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.007041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.007074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.007188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.007232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.007423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.007456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-12-10 05:55:45.007635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.007667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.007911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.007945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.008078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.008111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.008302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.008335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.008453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.008486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-12-10 05:55:45.008683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.008717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.008851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.008883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.009003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.009037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.009213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.009258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.009370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.009402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-12-10 05:55:45.009586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.009621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.009801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.009834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.010047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.010081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.010342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.010376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-12-10 05:55:45.010562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-12-10 05:55:45.010596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-12-10 05:55:45.010700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.212 [2024-12-10 05:55:45.010732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.212 qpair failed and we were unable to recover it.
[last three messages repeated with advancing timestamps through 05:55:45.033, same tqpair=0x7f1454000b90, addr=10.0.0.2, port=4420]
00:30:27.215 [2024-12-10 05:55:45.033607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.033640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.215 qpair failed and we were unable to recover it. 00:30:27.215 [2024-12-10 05:55:45.033835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.033869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.215 qpair failed and we were unable to recover it. 00:30:27.215 [2024-12-10 05:55:45.033997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.034031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.215 qpair failed and we were unable to recover it. 00:30:27.215 [2024-12-10 05:55:45.034299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.034334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.215 qpair failed and we were unable to recover it. 00:30:27.215 [2024-12-10 05:55:45.034465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.034498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.215 qpair failed and we were unable to recover it. 
00:30:27.215 [2024-12-10 05:55:45.034673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.034708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.215 qpair failed and we were unable to recover it. 00:30:27.215 [2024-12-10 05:55:45.034816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.215 [2024-12-10 05:55:45.034849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.034958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.034991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.035186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.035229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.035404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.035438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.035628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.035661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.035798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.035832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.036013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.036047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.036181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.036213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.036404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.036438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.036612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.036646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.036890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.036922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.037177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.037211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.037409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.037443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.037667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.037701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.037807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.037846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.037954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.037987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.038237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.038273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.038402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.038437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.038684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.038717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.038904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.038936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.039165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.039198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.039340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.039374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.039495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.039529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.039700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.039733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.039905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.039939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.040052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.040087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.040231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.040266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.040396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.040431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.040618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.040652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.040843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.040878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.041067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.041101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.041238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.041272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.041392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.041426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.041533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.041566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.041856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.041890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.042170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.042204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.042468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.042502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.042684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.042716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-12-10 05:55:45.042902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-12-10 05:55:45.042938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-12-10 05:55:45.043067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.043100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.043209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.043256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.043451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.043485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.043656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.043690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.043809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.043843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-12-10 05:55:45.044020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.044053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.044240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.044275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.044412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.044445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.044550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.044584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.044756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.044789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-12-10 05:55:45.044973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.045006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.045116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.045149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.045392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.045426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.045628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.045661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.045840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.045874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-12-10 05:55:45.045999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.046038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.046293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.046329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.046576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.046610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.046809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.046841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.046972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.047004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-12-10 05:55:45.047118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.047152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.047277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.047314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.047488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.047521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.047640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.047674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.047857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.047890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-12-10 05:55:45.048083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.048118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.048303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.048338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.048474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.048507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.048613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.048646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.048785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.048819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-12-10 05:55:45.049061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.049094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.049341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.049376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.049562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.049595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.049777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.049809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-12-10 05:55:45.049996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-12-10 05:55:45.050030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.220 [2024-12-10 05:55:45.072615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.220 [2024-12-10 05:55:45.072648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.220 qpair failed and we were unable to recover it. 00:30:27.220 [2024-12-10 05:55:45.072851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.220 [2024-12-10 05:55:45.072885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.220 qpair failed and we were unable to recover it. 00:30:27.220 [2024-12-10 05:55:45.073126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.220 [2024-12-10 05:55:45.073159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.073288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.073322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.073516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.073548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.073692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.073725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.073855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.073888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.074003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.074035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.074274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.074309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.074441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.074473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.074609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.074641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.074904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.074939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.075127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.075160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.075339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.075372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.075504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.075538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.075643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.075676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.075813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.075846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.076070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.076103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.076314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.076348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.076518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.076671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.076705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.076889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.076924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.077046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.077079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.077253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.077288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.077403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.077435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.077651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.077684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.077893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.077925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.078048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.078081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.078318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.078352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.078543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.078574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.078754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.078786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.078985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.079020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.079196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.079237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.079369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.079401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.079568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.079600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.079717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.079752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.079884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.079917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.080095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.080130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.080311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.080347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-12-10 05:55:45.080478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.080511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 
00:30:27.221 [2024-12-10 05:55:45.080687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-12-10 05:55:45.080720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.080967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.081000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.081129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.081162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.081356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.081389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.081518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.081551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.081766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.081798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.081913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.081946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.082063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.082096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.082297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.082332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.082542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.082575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.082817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.082850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.083032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.083064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.083246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.083281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.083520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.083552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.083658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.083691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.083805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.083838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.083967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.084000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.084174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.084213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.084374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.084407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.084580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.084612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.084786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.084820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.084930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.084962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.085076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.085109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.085297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.085330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.085446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.085480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.085650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.085681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.085792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.085824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.085999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.086033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.086245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.086280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.086392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.086425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.086658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.086692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.086913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.086946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.087133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.087166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.087469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.087503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.087675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.087707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-12-10 05:55:45.087813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.087847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.088033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.088067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-12-10 05:55:45.088247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-12-10 05:55:45.088281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-12-10 05:55:45.088456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-12-10 05:55:45.088490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-12-10 05:55:45.088595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-12-10 05:55:45.088627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.227 [2024-12-10 05:55:45.113336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.113370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.113503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.113534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.113653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.113683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.113870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.113900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.114017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.114048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 
00:30:27.227 [2024-12-10 05:55:45.114163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.114194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.114438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.114470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.114572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.114603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.114711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.114741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.114999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.115029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 
00:30:27.227 [2024-12-10 05:55:45.115142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.115173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.115438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.115471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.115664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.115693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.227 [2024-12-10 05:55:45.115807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.227 [2024-12-10 05:55:45.115838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.227 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.116019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.116050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 
00:30:27.228 [2024-12-10 05:55:45.116241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.116273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.116384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.116414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.116529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.116560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.116659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.116689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.116935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.116966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 
00:30:27.228 [2024-12-10 05:55:45.117079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.117109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.117283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.117314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.117431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.117460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.117579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.117609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.117779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.117811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 
00:30:27.228 [2024-12-10 05:55:45.117927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.117959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.118172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.118203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.118327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.118362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.118566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.118596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.118723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.118754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 
00:30:27.228 [2024-12-10 05:55:45.118925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.118956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.119208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.119268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.228 qpair failed and we were unable to recover it. 00:30:27.228 [2024-12-10 05:55:45.119473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.228 [2024-12-10 05:55:45.119507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.119693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.119723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.119949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.119981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 
00:30:27.229 [2024-12-10 05:55:45.120176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.120210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.120413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.120444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.120558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.120588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.120702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.120733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.120875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.120906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 
00:30:27.229 [2024-12-10 05:55:45.121101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.121133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.121321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.121368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.121547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.121582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.121721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.121752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.121931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.121964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 
00:30:27.229 [2024-12-10 05:55:45.122076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.122116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.122244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.229 [2024-12-10 05:55:45.122277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.229 qpair failed and we were unable to recover it. 00:30:27.229 [2024-12-10 05:55:45.122453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.122484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 00:30:27.230 [2024-12-10 05:55:45.122676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.122705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 00:30:27.230 [2024-12-10 05:55:45.122881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.122910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 
00:30:27.230 [2024-12-10 05:55:45.123086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.123118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 00:30:27.230 [2024-12-10 05:55:45.123317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.123352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 00:30:27.230 [2024-12-10 05:55:45.123486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.123524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 00:30:27.230 [2024-12-10 05:55:45.123794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.123826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 00:30:27.230 [2024-12-10 05:55:45.123938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.230 [2024-12-10 05:55:45.123970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.230 qpair failed and we were unable to recover it. 
00:30:27.516 [2024-12-10 05:55:45.124139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.516 [2024-12-10 05:55:45.124170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.516 qpair failed and we were unable to recover it. 00:30:27.516 [2024-12-10 05:55:45.124349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.516 [2024-12-10 05:55:45.124380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.516 qpair failed and we were unable to recover it. 00:30:27.516 [2024-12-10 05:55:45.124496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.124527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.124711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.124741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.124841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.124870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.125055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.125086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.125269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.125303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.125538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.125568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.125678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.125709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.125877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.125908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.126022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.126064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.126344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.126392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.126601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.126653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.126798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.126842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.127033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.127087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.127254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.127301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.127518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.127568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.127710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.127756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.127956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.128006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.128160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.128205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.128399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.128446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.128656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.128705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.128975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.129023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.129235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.129282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.129436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.129481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.129621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.129668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.129924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.129971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.130126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.130170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.130386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.130435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.130560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.130593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.130773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.130807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.130995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.131027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.131157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.131190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.131412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.131447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.131573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.131606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 
00:30:27.517 [2024-12-10 05:55:45.131726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.131757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.131946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.131979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.132156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.132189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.132434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.517 [2024-12-10 05:55:45.132507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.517 qpair failed and we were unable to recover it. 00:30:27.517 [2024-12-10 05:55:45.132740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.132780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.132990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.133026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.133136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.133170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.133312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.133348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.133587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.133621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.133822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.133855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.134026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.134059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.134260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.134294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.134549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.134581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.134826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.134861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.135052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.135084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.135354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.135389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.135580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.135614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.135809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.135842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.135962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.135995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.136178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.136210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.136346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.136381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.136491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.136524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.136708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.136742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.136917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.136950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.137121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.137156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.137344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.137379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.137590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.137624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.137756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.137789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.138054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.138087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.138209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.138256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.138377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.138409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.138544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.138583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.138759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3d460 is same with the state(6) to be set 00:30:27.518 [2024-12-10 05:55:45.139130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.139201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.139450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.139487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.139685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.139721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.139904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.139937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.140178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.140212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.140408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.140441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.140578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.140610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.518 [2024-12-10 05:55:45.140792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.140826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 
00:30:27.518 [2024-12-10 05:55:45.141112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.518 [2024-12-10 05:55:45.141146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.518 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.141333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.141367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.141559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.141591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.141766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.141799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.141941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.141973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.142169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.142201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.142453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.142486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.142612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.142643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.142837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.142869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.143064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.143098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.143299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.143332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.143527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.143558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.143683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.143717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.143907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.143939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.144125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.144157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.144406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.144440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.144561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.144595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.144850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.144889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.145068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.145101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.145309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.145344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.145531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.145564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.145690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.145722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.145913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.145947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.146066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.146100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.146338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.146371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.146548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.146581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.146704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.146736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.146978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.147010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.147130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.147163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.147358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.147394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.147502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.147535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.147680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.147711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.147955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.147988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.148240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.148276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.148462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.148494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 
00:30:27.519 [2024-12-10 05:55:45.148695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.148729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.148940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.148975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.149215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.149261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.149450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.149483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.519 qpair failed and we were unable to recover it. 00:30:27.519 [2024-12-10 05:55:45.149671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.519 [2024-12-10 05:55:45.149705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 
00:30:27.520 [2024-12-10 05:55:45.149888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.149921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.150094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.150127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.150248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.150285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.150499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.150532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.150725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.150759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 
00:30:27.520 [2024-12-10 05:55:45.150885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.150917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.151095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.151127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.151251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.151286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.151478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.151511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-12-10 05:55:45.151626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.520 [2024-12-10 05:55:45.151658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.520 qpair failed and we were unable to recover it. 
00:30:27.520 [2024-12-10 05:55:45.151841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.151876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.152057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.152091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.152201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.152250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.152426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.152458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.152644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.152678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.152876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.152912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.153166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.153199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.153315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.153353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.153530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.153563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.153830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.153862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.154037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.154071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.154209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.154256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.154428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.154462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.154590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.154624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.154740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.154775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.154961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.154996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.155103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.155135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.155332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.155366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.155504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.155537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.155663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.155695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.155938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.155970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.156149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.156182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.156453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.156489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.156686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.156719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.156837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.156871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.156989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.157023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.157214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.520 [2024-12-10 05:55:45.157256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.520 qpair failed and we were unable to recover it.
00:30:27.520 [2024-12-10 05:55:45.157369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.157403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.157604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.157637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.157826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.157861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.158034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.158067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.158176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.158208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.158346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.158379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.158637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.158669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.158844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.158916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.159120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.159158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.159370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.159405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.159611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.159645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.159905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.159939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.160108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.160141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.160341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.160378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.160491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.160525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.160702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.160734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.160976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.161010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.161246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.161279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.161493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.161524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.161643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.161678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.161797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.161830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.162030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.162063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.162186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.162246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.162430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.162465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.162585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.162617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.162758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.162792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.162965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.162996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.163183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.163229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.163472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.163505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.163697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.163729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.521 [2024-12-10 05:55:45.163934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.521 [2024-12-10 05:55:45.163966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.521 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.164178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.164210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.164422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.164455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.164646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.164679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.164869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.164906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.165031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.165063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.165266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.165300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.165497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.165531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.165657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.165691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.165863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.165895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.166022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.166056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.166186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.166226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.166409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.166443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.166549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.166582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.166764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.166798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.166980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.167012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.167131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.167163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.167304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.167345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.167463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.167497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.167682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.167715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.167898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.167932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.168076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.168108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.168309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.168345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.168582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.168615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.168722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.168756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.168924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.168959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.169144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.169178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.169316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.169350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.169538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.169571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.169762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.169797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.169968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.170002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.170190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.170230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.170358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.170392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.170511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.170544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.170667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.170701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.170836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.170871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.522 [2024-12-10 05:55:45.171067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.522 [2024-12-10 05:55:45.171102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.522 qpair failed and we were unable to recover it.
00:30:27.523 [2024-12-10 05:55:45.171280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.523 [2024-12-10 05:55:45.171315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.523 qpair failed and we were unable to recover it.
00:30:27.523 [2024-12-10 05:55:45.171491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.171523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.171713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.171747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.171953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.171989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.172122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.172154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.172271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.172306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.172572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.172606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.172791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.172831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.173074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.173109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.173359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.173394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.173587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.173620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.173815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.173847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.174037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.174070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.174264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.174299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.174477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.174509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.174709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.174742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.174927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.174960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.175136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.175170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.175360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.175395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.175647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.175682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.175785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.175817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.175995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.176029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.176133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.176167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.176361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.176398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.176518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.176550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.176731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.176766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.177014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.177046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.177233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.177268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.177451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.177486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.177603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.177634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.177821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.177854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.178061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.178096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.178311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.178347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.178561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.178595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.178741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.178774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.178889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.178923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 
00:30:27.523 [2024-12-10 05:55:45.179111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.523 [2024-12-10 05:55:45.179144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.523 qpair failed and we were unable to recover it. 00:30:27.523 [2024-12-10 05:55:45.179319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.524 [2024-12-10 05:55:45.179353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.524 qpair failed and we were unable to recover it. 00:30:27.524 [2024-12-10 05:55:45.179462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.524 [2024-12-10 05:55:45.179494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.524 qpair failed and we were unable to recover it. 00:30:27.524 [2024-12-10 05:55:45.179695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.524 [2024-12-10 05:55:45.179729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.524 qpair failed and we were unable to recover it. 00:30:27.524 [2024-12-10 05:55:45.179901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.524 [2024-12-10 05:55:45.179934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.524 qpair failed and we were unable to recover it. 
00:30:27.524 [2024-12-10 05:55:45.180055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.524 [2024-12-10 05:55:45.180089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.524 qpair failed and we were unable to recover it.
00:30:27.524 [2024-12-10 05:55:45.180328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.524 [2024-12-10 05:55:45.180399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.524 qpair failed and we were unable to recover it.
00:30:27.524 [2024-12-10 05:55:45.180631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.524 [2024-12-10 05:55:45.180666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.524 qpair failed and we were unable to recover it.
00:30:27.524 [2024-12-10 05:55:45.180862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.524 [2024-12-10 05:55:45.180895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.524 qpair failed and we were unable to recover it.
00:30:27.524 [2024-12-10 05:55:45.181151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.524 [2024-12-10 05:55:45.181184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.524 qpair failed and we were unable to recover it.
00:30:27.525 [2024-12-10 05:55:45.193064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.193096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.525 [2024-12-10 05:55:45.193277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.193312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.525 [2024-12-10 05:55:45.193489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.193521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.525 [2024-12-10 05:55:45.193648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.193682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.525 [2024-12-10 05:55:45.193809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.193842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 
00:30:27.525 [2024-12-10 05:55:45.194033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.194065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.525 [2024-12-10 05:55:45.194250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.194282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.525 [2024-12-10 05:55:45.194453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.525 [2024-12-10 05:55:45.194485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.525 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.194730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.194761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.195018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.195052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.195175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.195208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.195348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.195381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.195500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.195531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.195643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.195676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.195806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.195840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.196012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.196045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.196236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.196271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.196393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.196425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.196665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.196697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.196803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.196835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.197009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.197042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.197228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.197261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.197530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.197562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.197753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.197785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.197914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.197948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.198076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.198107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.198226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.198259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.198390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.198422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.198530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.198561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.198755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.198790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.199035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.199069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.199198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.199243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.199427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.199459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.199600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.199633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.199872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.199904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.200018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.200050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.200257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.200521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.200554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.200737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.200771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.200898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.200931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 
00:30:27.526 [2024-12-10 05:55:45.201123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.201155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.526 [2024-12-10 05:55:45.201338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.526 [2024-12-10 05:55:45.201373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.526 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.201555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.201586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.201780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.201812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.202002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.202035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.202202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.202244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.202358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.202390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.202571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.202604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.202726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.202758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.202870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.202902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.203169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.203202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.203388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.203421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.203543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.203575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.203685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.203717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.203891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.203923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.204092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.204125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.204237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.204272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.204376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.204407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.204596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.204628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.204824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.204857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.205113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.205145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.205409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.205443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.205622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.205654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.205832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.205878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.206037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.206069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.206274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.206307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.206478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.206510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.206693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.206725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.206868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.207105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.207138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.207330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.207364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.207484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.207517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.207644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.207676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.207930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.207962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 00:30:27.527 [2024-12-10 05:55:45.208155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.527 [2024-12-10 05:55:45.208187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.527 qpair failed and we were unable to recover it. 
00:30:27.527 [2024-12-10 05:55:45.208343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.527 [2024-12-10 05:55:45.208377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.527 qpair failed and we were unable to recover it.
00:30:27.527 [... previous messages repeated for tqpair=0x1b2f500 from 05:55:45.208550 through 05:55:45.219869 ...]
00:30:27.529 [2024-12-10 05:55:45.220099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.529 [2024-12-10 05:55:45.220171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.529 qpair failed and we were unable to recover it.
00:30:27.531 [... previous messages repeated for tqpair=0x7f1450000b90 from 05:55:45.220461 through 05:55:45.232800 ...]
00:30:27.531 [2024-12-10 05:55:45.232932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.232965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.233238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.233273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.233449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.233482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.233612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.233645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.233828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.233862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 
00:30:27.531 [2024-12-10 05:55:45.234038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.234072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.234174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.234207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.234414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.234447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.234633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.234668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.234862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.234894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 
00:30:27.531 [2024-12-10 05:55:45.235093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.235126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.235306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.235342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.235517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.235550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.235725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.235758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.235999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.236032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 
00:30:27.531 [2024-12-10 05:55:45.236163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.236195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.236407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.236441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.236575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.236609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.236796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.236828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.237005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.237037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 
00:30:27.531 [2024-12-10 05:55:45.237169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.237201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.237319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.237359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.237480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.237513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.237636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.237670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.237858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.237893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 
00:30:27.531 [2024-12-10 05:55:45.238132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.238164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.238455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.238490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.238691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.238724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.238835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.531 [2024-12-10 05:55:45.238868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.531 qpair failed and we were unable to recover it. 00:30:27.531 [2024-12-10 05:55:45.239105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.239138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.239422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.239456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.239609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.239643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.239867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.239900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.240117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.240149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.240390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.240423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.240617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.240650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.240917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.240950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.241123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.241155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.241287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.241320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.241510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.241543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.241657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.241690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.241840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.241873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.241996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.242029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.242239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.242273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.242393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.242426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.242549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.242582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.242691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.242723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.242846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.242878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.243068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.243103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.243300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.243335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.243513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.243545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.243747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.243780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.243918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.243949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.244090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.244123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.244312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.244349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.244476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.244508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.244639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.244672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.244863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.244895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.245068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.245100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.245389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.245425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.245663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.245696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.245876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.245915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.246051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.246084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.246190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.246232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.246475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.246508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 
00:30:27.532 [2024-12-10 05:55:45.246771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.246803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.246928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.246960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.532 qpair failed and we were unable to recover it. 00:30:27.532 [2024-12-10 05:55:45.247101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.532 [2024-12-10 05:55:45.247135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 00:30:27.533 [2024-12-10 05:55:45.247282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.247316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 00:30:27.533 [2024-12-10 05:55:45.247501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.247534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 
00:30:27.533 [2024-12-10 05:55:45.247659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.247692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 00:30:27.533 [2024-12-10 05:55:45.247864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.247897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 00:30:27.533 [2024-12-10 05:55:45.248028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.248061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 00:30:27.533 [2024-12-10 05:55:45.248248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.248284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 00:30:27.533 [2024-12-10 05:55:45.248544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.533 [2024-12-10 05:55:45.248577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.533 qpair failed and we were unable to recover it. 
00:30:27.533 [2024-12-10 05:55:45.248768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.533 [2024-12-10 05:55:45.248803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.533 qpair failed and we were unable to recover it.
00:30:27.536 [... the same connect()/qpair-failure message sequence (errno = 111, tqpair=0x7f1450000b90, addr=10.0.0.2, port=4420) repeated for each subsequent connection attempt from 05:55:45.248934 through 05:55:45.272605 ...]
00:30:27.536 [2024-12-10 05:55:45.272786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.272820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.272945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.272977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.273100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.273133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.273339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.273373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.273572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.273606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 
00:30:27.536 [2024-12-10 05:55:45.273728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.273763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.273896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.273931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.274171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.274204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.274317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.274350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.274531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.274564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 
00:30:27.536 [2024-12-10 05:55:45.274767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.274799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.274980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.275013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.275138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.275171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.275327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.275360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.275579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 
00:30:27.536 [2024-12-10 05:55:45.275710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.275744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.275920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.275952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.276071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.276103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.276265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.276301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.276499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.276532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 
00:30:27.536 [2024-12-10 05:55:45.276784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.276817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.277018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.277052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.277234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.277267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.536 qpair failed and we were unable to recover it. 00:30:27.536 [2024-12-10 05:55:45.277410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.536 [2024-12-10 05:55:45.277443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.277630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.277663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.277923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.277955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.278090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.278121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.278391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.278424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.278604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.278636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.278774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.278806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.278988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.279022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.279144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.279181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.279378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.279412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.279554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.279588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.279693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.279724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.279905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.279936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.280175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.280208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.280349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.280381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.280574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.280607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.280729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.280764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.280945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.280977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.281172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.281206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.281405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.281438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.281612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.281646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.281935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.281968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.282089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.282124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.282298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.282333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.282505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.282538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.282674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.282706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.282902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.282935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.283045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.283078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.283326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.283358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.283480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.283514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.283642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.283675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.283784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.283817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.284002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.284035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.284216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.284275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.284464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.284498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.284683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.284716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 00:30:27.537 [2024-12-10 05:55:45.284908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.537 [2024-12-10 05:55:45.284942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.537 qpair failed and we were unable to recover it. 
00:30:27.537 [2024-12-10 05:55:45.285059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.285093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.285208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.285254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.285360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.285393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.285502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.285719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.285751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.285858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.285890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.286023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.286055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.286161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.286194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.286337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.286370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.286475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.286508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.286737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.286771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.286943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.286981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.287240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.287275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.287469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.287504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.287635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.287668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.287801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.287834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.288026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.288060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.288252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.288288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.288477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.288510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.288694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.288727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.288837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.288870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.289055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.289088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.289318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.289353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.289597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.289632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.289805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.289839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.290020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.290054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.290298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.290332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.290454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.290487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.290669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.290701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.290817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.290853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.291046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.291078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.291265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.291299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.291473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.291507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.291612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.291645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.291856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.291888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 [2024-12-10 05:55:45.292111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.292145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.292267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.292300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 00:30:27.538 [2024-12-10 05:55:45.292440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.538 [2024-12-10 05:55:45.292472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.538 qpair failed and we were unable to recover it. 
00:30:27.538 Read completed with error (sct=0, sc=8) 00:30:27.538 starting I/O failed 00:30:27.538 Read completed with error (sct=0, sc=8) 00:30:27.538 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 
00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Write completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 Read completed with error (sct=0, sc=8) 00:30:27.539 starting I/O failed 00:30:27.539 [2024-12-10 05:55:45.293140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.539 [2024-12-10 05:55:45.293324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.293382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.293575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.293612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 
00:30:27.539 [2024-12-10 05:55:45.293851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.293883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.294059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.294092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.294213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.294263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.294385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.294418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.294589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.294630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 
00:30:27.539 [2024-12-10 05:55:45.294828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.294862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.295000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.295033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.295237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.295272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.295395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.295426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.295565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.295598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 
00:30:27.539 [2024-12-10 05:55:45.295780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.295812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.295930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.295963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.296088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.296120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.296299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.296335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.296583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.296617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 
00:30:27.539 [2024-12-10 05:55:45.296789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.296820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.297026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.297058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.297250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.297286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.297428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.297462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.297666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.297699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 
00:30:27.539 [2024-12-10 05:55:45.297959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.297991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.298289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.298323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.298505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.298537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.298649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.298680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 00:30:27.539 [2024-12-10 05:55:45.298875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.298908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.539 qpair failed and we were unable to recover it. 
00:30:27.539 [2024-12-10 05:55:45.299191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.539 [2024-12-10 05:55:45.299235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.299380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.299415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.299609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.299642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.299756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.299788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.300043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.300077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 
00:30:27.540 [2024-12-10 05:55:45.300203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.300249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.300371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.300404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.300530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.300562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.300747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.300780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.301023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.301056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 
00:30:27.540 [2024-12-10 05:55:45.301198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.301247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.301378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.301411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.301586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.301618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.301817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.301849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.301973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.302006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 
00:30:27.540 [2024-12-10 05:55:45.302132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.302165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.302361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.302395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.302529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.302563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.302758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.302792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.302906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.302938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 
00:30:27.540 [2024-12-10 05:55:45.303208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.303251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.303380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.303414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.303617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.303650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.303783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.303815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.303988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.304021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 
00:30:27.540 [2024-12-10 05:55:45.304157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.304189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.304331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.304366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.304542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.304574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.304746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.304779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.304888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.304922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 
00:30:27.540 [2024-12-10 05:55:45.305044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.305076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.305199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.305244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.540 [2024-12-10 05:55:45.305370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.540 [2024-12-10 05:55:45.305405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.540 qpair failed and we were unable to recover it. 00:30:27.541 [2024-12-10 05:55:45.305538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.541 [2024-12-10 05:55:45.305571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.541 qpair failed and we were unable to recover it. 00:30:27.541 [2024-12-10 05:55:45.305706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.541 [2024-12-10 05:55:45.305740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.541 qpair failed and we were unable to recover it. 
00:30:27.541 [2024-12-10 05:55:45.305938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.541 [2024-12-10 05:55:45.305971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.541 qpair failed and we were unable to recover it.
00:30:27.544 [2024-12-10 05:55:45.329254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.544 [2024-12-10 05:55:45.329288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.544 qpair failed and we were unable to recover it.
00:30:27.544 [2024-12-10 05:55:45.329472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.329505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.329648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.329682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.329866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.329899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.330106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.330139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.330333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.330368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 
00:30:27.544 [2024-12-10 05:55:45.330501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.330535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.330722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.330755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.330958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.330993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.331176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.331209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.331427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.331459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 
00:30:27.544 [2024-12-10 05:55:45.331588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.331622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.331731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.331764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.331870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.331903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.332087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.332120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.332291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.332325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 
00:30:27.544 [2024-12-10 05:55:45.332438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.332472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.332653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.332691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.332872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.332906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.333098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.333132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.333269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.333304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 
00:30:27.544 [2024-12-10 05:55:45.333436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.333471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.333600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.333631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.333755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.333789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.333905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.333938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.334076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.334108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 
00:30:27.544 [2024-12-10 05:55:45.334350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.334384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.334496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.334531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.544 qpair failed and we were unable to recover it. 00:30:27.544 [2024-12-10 05:55:45.334722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.544 [2024-12-10 05:55:45.334753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.334955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.334990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.335102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.335134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.335255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.335290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.335401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.335434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.335567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.335601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.335723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.335756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.335951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.335985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.336099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.336131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.336273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.336308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.336438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.336471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.336648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.336682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.336886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.336919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.337029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.337063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.337166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.337200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.337423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.337456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.337679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.337712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.337827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.337861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.338044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.338077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.338273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.338307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.338442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.338475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.338610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.338644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.338764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.338797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.338977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.339010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.339136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.339170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.339441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.339476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.339684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.339719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.339911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.339944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.340195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.340240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.340382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.340420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.340550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.340584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.340753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.340788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.340968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.341002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 
00:30:27.545 [2024-12-10 05:55:45.341188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.341232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.341357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.545 [2024-12-10 05:55:45.341390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.545 qpair failed and we were unable to recover it. 00:30:27.545 [2024-12-10 05:55:45.341522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.341556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.341766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.341798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.341933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.341967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 
00:30:27.546 [2024-12-10 05:55:45.342083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.342115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.342290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.342325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.342505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.342540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.342650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.342683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.342800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.342833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 
00:30:27.546 [2024-12-10 05:55:45.343030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.343063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.343274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.343308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.343486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.343519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.343703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.343737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.343980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.344014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 
00:30:27.546 [2024-12-10 05:55:45.344122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.344360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.344395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.344515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.344548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.344789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.344822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 00:30:27.546 [2024-12-10 05:55:45.344948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.546 [2024-12-10 05:55:45.344983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.546 qpair failed and we were unable to recover it. 
00:30:27.549 [2024-12-10 05:55:45.366869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.366901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.367010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.367044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.367165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.367199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.367349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.367383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.367505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.367540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 
00:30:27.549 [2024-12-10 05:55:45.367664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.367697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.367820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.367853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.368114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.368147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.368287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.368322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.368461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.368494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 
00:30:27.549 [2024-12-10 05:55:45.368682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.368716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.368900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.368933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.369117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.369150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.369390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.369425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.369547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.369579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 
00:30:27.549 [2024-12-10 05:55:45.369774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.369806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.370006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.370041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.549 qpair failed and we were unable to recover it. 00:30:27.549 [2024-12-10 05:55:45.370232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.549 [2024-12-10 05:55:45.370266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.370442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.370476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.370664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.370698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.370882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.370915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.371099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.371132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.371312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.371345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.371477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.371512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.371711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.371744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.371926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.371960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.372096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.372136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.372270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.372303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.372498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.372533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.372716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.372750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.372929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.372962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.373208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.373256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.373364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.373398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.373571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.373605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.373790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.373822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.374009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.374044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.374236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.374271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.374389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.374422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.374627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.374660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.374913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.374946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.375163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.375197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.375382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.375416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.375527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.375560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.375755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.375789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.375979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.376012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.376133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.376167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.376312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.376348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.376460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.376493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.376624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.376658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.376848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.376882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.377007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.377040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.377143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.377175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.377395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.377432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.377590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.377663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.377909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.377947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 
00:30:27.550 [2024-12-10 05:55:45.378191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.378242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.550 [2024-12-10 05:55:45.378433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.550 [2024-12-10 05:55:45.378469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.550 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.379864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.379919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.380078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.380114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.380380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.380415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 
00:30:27.551 [2024-12-10 05:55:45.380679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.380716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.380962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.380996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.381134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.381168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.381371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.381407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.381689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.381722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 
00:30:27.551 [2024-12-10 05:55:45.381901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.381933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.382065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.382099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.382238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.382275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.382511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.382545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.382793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.382825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 
00:30:27.551 [2024-12-10 05:55:45.382954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.382989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.383116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.383150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.383282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.383317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.383448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.383481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 00:30:27.551 [2024-12-10 05:55:45.383590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.551 [2024-12-10 05:55:45.383622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.551 qpair failed and we were unable to recover it. 
00:30:27.551 [2024-12-10 05:55:45.383812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.551 [2024-12-10 05:55:45.383847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.551 qpair failed and we were unable to recover it.
00:30:27.552 [previous 3 messages repeated 55 more times for tqpair=0x1b2f500, timestamps 05:55:45.384086 through 05:55:45.399656]
00:30:27.552 [2024-12-10 05:55:45.399880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.552 [2024-12-10 05:55:45.399940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.552 qpair failed and we were unable to recover it.
00:30:27.553 [previous 3 messages repeated 36 more times for tqpair=0x7f145c000b90, timestamps 05:55:45.400100 through 05:55:45.408614]
00:30:27.553 [2024-12-10 05:55:45.408837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.553 [2024-12-10 05:55:45.408904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.553 qpair failed and we were unable to recover it.
00:30:27.554 [previous 3 messages repeated 21 more times for tqpair=0x7f1454000b90, timestamps 05:55:45.409059 through 05:55:45.412762]
00:30:27.554 [2024-12-10 05:55:45.412869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.412901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.413004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.413035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.413208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.413254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.413428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.413461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.413703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.413738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 
00:30:27.554 [2024-12-10 05:55:45.413919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.413951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.414134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.414168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.414319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.414355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.414484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.414517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.416262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.416322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 
00:30:27.554 [2024-12-10 05:55:45.416525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.416562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.416751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.416785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.416964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.416998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.417112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.417145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.554 qpair failed and we were unable to recover it. 00:30:27.554 [2024-12-10 05:55:45.417338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.554 [2024-12-10 05:55:45.417374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.417560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.417595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.417707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.417740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.417986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.418022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.418184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.418226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.418403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.418436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.419763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.419815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.420098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.420133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.420399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.420433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.420604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.420637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.420774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.420815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.420954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.420988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.421097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.421128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.421304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.421339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.421460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.421489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.421619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.421652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.421768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.421799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.422000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.422034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.422144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.422174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.422367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.422404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.422588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.422740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.422772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.422928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.422960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.423081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.423114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.423239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.423275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.423396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.423428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.423547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.423580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.423688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.423721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.423910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.423944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.424227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.424260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.424383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.424411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-12-10 05:55:45.424524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.424553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.424673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.424701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.424917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-12-10 05:55:45.424952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-12-10 05:55:45.425143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.425178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.425364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.425397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-12-10 05:55:45.425512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.425545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.425817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.425890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.426034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.426071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.426196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.426247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.426486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.426518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-12-10 05:55:45.426650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.426683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.426804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.426835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.426953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.426983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.427155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.427188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.427380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.427415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-12-10 05:55:45.427569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.427602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.427715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.427745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.427860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.427890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.428074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.428107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.428317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.428362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-12-10 05:55:45.428481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.428516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.428625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.428660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.428835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.428868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.429041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.429075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.429189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.429231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-12-10 05:55:45.429355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.429389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.429574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.429607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.429781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.429816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.429922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.429952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-12-10 05:55:45.430124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-12-10 05:55:45.430158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-12-10 05:55:45.430294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.430326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.430446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.430478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.430592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.430623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.430825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.430858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.430984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.431014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.431256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.431291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.431406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.431437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.431556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.431589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-12-10 05:55:45.431700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-12-10 05:55:45.431734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.431910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.431942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.432146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.432179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.432373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.432409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.432586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.432619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.432789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.432821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.432952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.433121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.433272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.433424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.433569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.433710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.433940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.433971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.434082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.434110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.434235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.434264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.434380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.434411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.434578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.434610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.434783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.434815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.434937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.434971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.435078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.435111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.435284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.435318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.435433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.435466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.435662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.435696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.435812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.435844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.435970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.436003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.436182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.436216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.436475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.436510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.436618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.436648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.436829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.436861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.436976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.437006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.437141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.437171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.437365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.437401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.437533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.437563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-12-10 05:55:45.437671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-12-10 05:55:45.437702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.437822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.437856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.438050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.438084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.438195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.438237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.438349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.438382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.438634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.438687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.438828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.438864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.439053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.439086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.439201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.439248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.439511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.439556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.439679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.439712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.439975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.440008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.440198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.440241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.440489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.440522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.440656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.440702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.440896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.440938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.441064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.441096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.441275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.441325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.441441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.441474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.441662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.441696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.441882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.441914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.442085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.442118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.442241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.442287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.442423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.442479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.442622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.442656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-12-10 05:55:45.442829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-12-10 05:55:45.442861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.443121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.443154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.443351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.443387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.443566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.443599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.443750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.443784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.443932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.443965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.444209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.444253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.444382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.444414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.444532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.444566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.444689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.444722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.444849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.444883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.444999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.445031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.445148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.445182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.445329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.445363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.445488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.445522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.840 [2024-12-10 05:55:45.445704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.840 [2024-12-10 05:55:45.445737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.840 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.445858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.445891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.446102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.446136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.446259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.446293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.446485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.446517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.446699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.446732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.446880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.446913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.447030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.447062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.447243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.447277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.447519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.447552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.447741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.447773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.447962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.447995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.448104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.448136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.448263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.448296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.448475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.448508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.448622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.448660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.448779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.448810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.448997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.449030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.449214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.449257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.449438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.449471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.449746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.449779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.449955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.449989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.450177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.450210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.450343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.450375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.450512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.450545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.450665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.450698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.450813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.450845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.450954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.450986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.451110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.451142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.451269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.451303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.451479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.451512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.451630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.451663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.453429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.453485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.453696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.841 [2024-12-10 05:55:45.453734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.841 qpair failed and we were unable to recover it.
00:30:27.841 [2024-12-10 05:55:45.453928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.841 [2024-12-10 05:55:45.453960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.841 qpair failed and we were unable to recover it. 00:30:27.841 [2024-12-10 05:55:45.454158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.841 [2024-12-10 05:55:45.454191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.841 qpair failed and we were unable to recover it. 00:30:27.841 [2024-12-10 05:55:45.454319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.841 [2024-12-10 05:55:45.454354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.841 qpair failed and we were unable to recover it. 00:30:27.841 [2024-12-10 05:55:45.454627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.841 [2024-12-10 05:55:45.454660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.841 qpair failed and we were unable to recover it. 00:30:27.841 [2024-12-10 05:55:45.454846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.841 [2024-12-10 05:55:45.454882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.841 qpair failed and we were unable to recover it. 
00:30:27.841 [2024-12-10 05:55:45.454997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.841 [2024-12-10 05:55:45.455030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.455204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.455247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.455366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.455399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.455526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.455560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.455682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.455714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 
00:30:27.842 [2024-12-10 05:55:45.457049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.457100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.457301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.457340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.457454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.457486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.457640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.457672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.457800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.457833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 
00:30:27.842 [2024-12-10 05:55:45.458017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.458051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.458243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.458277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.458474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.458507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.458615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.458647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.458821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.458855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 
00:30:27.842 [2024-12-10 05:55:45.458981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.459013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.459131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.459175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.459371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.459405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.459531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.459564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.459672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.459702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 
00:30:27.842 [2024-12-10 05:55:45.459885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.459918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.460034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.460064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.460352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.460387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.460490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.460520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.460795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.460828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 
00:30:27.842 [2024-12-10 05:55:45.461001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.461034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.461171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.461203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.461458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.461492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.462858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.462910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.463175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.463210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 
00:30:27.842 [2024-12-10 05:55:45.464608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.464660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.464985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.465015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.466166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.466211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.466477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.466507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.842 qpair failed and we were unable to recover it. 00:30:27.842 [2024-12-10 05:55:45.466715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.842 [2024-12-10 05:55:45.466744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.466862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.466907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.467095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.467128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.467331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.467365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.467535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.467562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.467678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.467703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.467871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.467898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.467997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.468039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.468161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.468195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.468480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.468514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.468650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.468677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.468794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.468821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.470009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.470052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.470281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.470314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.470595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.470623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.470796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.470824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.471010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.471037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.471131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.471159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.471288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.471318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.471429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.471457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.471571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.471599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.471801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.471835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.472036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.472076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.472265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.472300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.472432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.472459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.472566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.472594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.472707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.472735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.472826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.472854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.473019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.473046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.473151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.473178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.473369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.473402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.473542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.473575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.473700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.473733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.473920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.473953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.474124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.474153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 00:30:27.843 [2024-12-10 05:55:45.474315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.843 [2024-12-10 05:55:45.474347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.843 qpair failed and we were unable to recover it. 
00:30:27.843 [2024-12-10 05:55:45.474473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.474501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.474601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.474629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.474858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.474891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.475027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.475059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.475174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.475210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 
00:30:27.844 [2024-12-10 05:55:45.475403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.475429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.475536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.475563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.475738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.475765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.475937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.475962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 00:30:27.844 [2024-12-10 05:55:45.476150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.844 [2024-12-10 05:55:45.476183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.844 qpair failed and we were unable to recover it. 
00:30:27.844 [2024-12-10 05:55:45.476453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.476487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.476677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.476710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.476826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.476859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.477106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.477133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.477246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.477271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.477386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.477412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.477514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.477541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.477650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.477676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.477844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.477878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.479073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.479114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.479383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.479413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.479644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.479671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.479897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.479923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.480152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.480178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.480274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.480299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.480505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.480538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.480669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.480709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.480837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.480870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.481110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.481143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.481269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.481303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.481472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.481497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.481603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.481629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.481824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.481850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.482019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.482045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.482153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.482178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.482349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.482375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.844 qpair failed and we were unable to recover it.
00:30:27.844 [2024-12-10 05:55:45.482489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.844 [2024-12-10 05:55:45.482515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.482633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.482660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.482828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.482860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.482971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.483004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.483121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.483154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.483345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.483380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.483576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.483608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.483896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.483929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.484066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.484100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.484347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.484374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.484484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.484509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.484615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.484641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.484820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.484845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.484942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.484967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.485169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.485195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.485386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.485413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.485526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.485551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.485659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.485685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.485858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.485886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.486079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.486107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.486238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.486272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.486410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.486444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.486645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.486678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.486871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.486903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.487947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.487978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.488140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.488168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.845 [2024-12-10 05:55:45.488381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.845 [2024-12-10 05:55:45.488410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.845 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.488572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.488600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.490099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.490147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.490379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.490411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.490581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.490608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.491287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.491334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.491596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.491629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.491822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.491852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.492043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.492070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.492161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.492184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.492479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.492510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.492734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.492763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.492954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.492982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.493185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.493212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.493350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.493378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.493493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.493521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.493633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.493660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.493835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.493863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.493973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.494001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.494178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.494206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.494404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.494433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.494550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.494576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.494813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.494842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.494934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.494960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.495063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.495091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.495203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.495241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.495433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.495461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.495551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.495578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.495704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.495731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.495900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.495927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.496152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.496179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.496359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.496388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.496485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.496511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.496686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.496713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.496870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.496895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.496997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.846 [2024-12-10 05:55:45.497023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.846 qpair failed and we were unable to recover it.
00:30:27.846 [2024-12-10 05:55:45.497128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.847 [2024-12-10 05:55:45.497154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:27.847 qpair failed and we were unable to recover it.
00:30:27.847 [2024-12-10 05:55:45.497311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.497340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.497513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.497545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.497649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.497673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.497782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.497808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.497967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.497994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.498099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.498125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.498284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.498313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.498420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.498443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.498691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.498717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.498882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.498908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.499077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.499104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.499261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.499288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.499454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.499479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.499643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.499669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.499777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.499802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.499900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.499926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.500053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.500079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.500176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.500203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.500390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.500416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.500532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.500557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.500745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.500771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.500876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.500903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.501030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.501055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.501275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.501304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.501528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.501554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.501663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.501690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.501852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.501879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.501988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.502013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.502246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.502274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.502364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.502389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.502495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.502521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.502694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.502719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.502946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.502973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.503066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.503093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.503197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.503231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 
00:30:27.847 [2024-12-10 05:55:45.503397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.503423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.503528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.503554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.503674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.847 [2024-12-10 05:55:45.503701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.847 qpair failed and we were unable to recover it. 00:30:27.847 [2024-12-10 05:55:45.503859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.503888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.504011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.504038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.504140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.504166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.504361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.504393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.504552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.504577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.504679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.504704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.504875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.504902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.505103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.505129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.505284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.505313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.505419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.505443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.505600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.505626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.505807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.505835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.505997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.506025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.506136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.506162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.506269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.506297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.506528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.506557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.506651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.506680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.506861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.506888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.507121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.507149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.507269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.507300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.507405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.507432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.507603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.507631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.507743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.507769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.507965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.507995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.508098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.508126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.508246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.508275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.508381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.508409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.508532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.508560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.508812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.508840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.508999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.509028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.509205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.509243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.509362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.509389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.509556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.509583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.509681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.509709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.509825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.509852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.509979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.510007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 00:30:27.848 [2024-12-10 05:55:45.510114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.848 [2024-12-10 05:55:45.510143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.848 qpair failed and we were unable to recover it. 
00:30:27.848 [2024-12-10 05:55:45.510339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.510370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.510550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.510578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.510738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.510767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.511027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.511055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.511165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.511193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 
00:30:27.849 [2024-12-10 05:55:45.511305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.511332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.511496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.511528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.511631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.511658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.511848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.511878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 00:30:27.849 [2024-12-10 05:55:45.512056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.512084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 
00:30:27.849 [2024-12-10 05:55:45.512193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.849 [2024-12-10 05:55:45.512231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.849 qpair failed and we were unable to recover it. 
00:30:27.852 [the same three-message sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated for every reconnect attempt from 05:55:45.512 through 05:55:45.534; repeats elided]
00:30:27.852 [2024-12-10 05:55:45.535087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.535120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.535331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.535370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.535557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.535590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.535709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.535741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.535936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.535968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 
00:30:27.852 [2024-12-10 05:55:45.536101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.536134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.536261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.536297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.536483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.536515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.536623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.536657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.536918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.536951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 
00:30:27.852 [2024-12-10 05:55:45.537137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.537170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.537298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.537333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.537518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.537553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.537734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.537767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.537894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.537925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 
00:30:27.852 [2024-12-10 05:55:45.538050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.538083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.538270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.538304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.538506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.538539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.538740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.538773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.538974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.539006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 
00:30:27.852 [2024-12-10 05:55:45.539120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.539154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.539422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.539456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.539587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.852 [2024-12-10 05:55:45.539619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-10 05:55:45.539725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.539757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.539931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.539964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.540245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.540280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.540397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.540427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.540549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.540582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.540800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.540833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.541004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.541038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.541141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.541173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.541427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.541463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.541677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.541708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.541892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.541925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.542179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.542211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.542466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.542500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.542675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.542708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.542904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.542937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.543191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.543232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.543386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.543419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.543593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.543626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.543802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.543836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.544028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.544061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.544179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.544212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.544483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.544515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.544755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.544789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.545029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.545062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.545315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.545350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.545479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.545512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.545721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.545755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.545936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.545969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.546145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.546178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.546431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.546467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.546645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.546677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.546943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.546976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.547169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.547202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.547346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.547380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.547562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.547594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.547710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.547740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.547876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.547909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-10 05:55:45.548036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.853 [2024-12-10 05:55:45.548068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-10 05:55:45.548249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.548283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.548477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.548509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.548708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.548741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.548862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.548894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.854 [2024-12-10 05:55:45.549030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.549064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.549326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.549361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.549474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.549507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.549711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.549749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.549876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.549910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.854 [2024-12-10 05:55:45.550100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.550133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.550372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.550407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.550523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.550554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.550674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.550707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.550910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.550945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.854 [2024-12-10 05:55:45.551144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.551177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.551305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.551337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.551443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.551475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.551665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.551698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-10 05:55:45.551881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.854 [2024-12-10 05:55:45.551913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.858 [2024-12-10 05:55:45.574951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.574983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.575249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.575283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.575414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.575447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.575707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.575739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.575861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.575893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 
00:30:27.858 [2024-12-10 05:55:45.576147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.576179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.576316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.576350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.576530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.576562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.576734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.576766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.576900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.576934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 
00:30:27.858 [2024-12-10 05:55:45.577048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.577080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.577246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.577281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.577423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.577455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.577567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.577601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.577781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.577814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 
00:30:27.858 [2024-12-10 05:55:45.578049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.578081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.578344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.578379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.858 qpair failed and we were unable to recover it. 00:30:27.858 [2024-12-10 05:55:45.578557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.858 [2024-12-10 05:55:45.578589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.578844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.578876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.579045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.579079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 
00:30:27.859 [2024-12-10 05:55:45.579345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.579379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.579641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.579673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.579856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.579889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.580062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.580095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.580234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.580267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 
00:30:27.859 [2024-12-10 05:55:45.580416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.580450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.580646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.580679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.580918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.580950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.581130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.581164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.581463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.581497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 
00:30:27.859 [2024-12-10 05:55:45.581707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.581739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.581919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.581952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.582129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.582162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.582358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.582401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.582592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.582624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 
00:30:27.859 [2024-12-10 05:55:45.582744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.582776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.582994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.583187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.583230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.583406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.583444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.583625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.583657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 
00:30:27.859 [2024-12-10 05:55:45.583829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.583861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.584113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.584147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.584340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.584374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.584558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.584592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.584767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.584800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 
00:30:27.859 [2024-12-10 05:55:45.584988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.585020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.585145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.859 [2024-12-10 05:55:45.585178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.859 qpair failed and we were unable to recover it. 00:30:27.859 [2024-12-10 05:55:45.585374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.585408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.585592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.585624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.585794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.585828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 
00:30:27.860 [2024-12-10 05:55:45.586065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.586098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.586300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.586345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.586526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.586559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.586741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.586774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.586973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.587006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 
00:30:27.860 [2024-12-10 05:55:45.587211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.587253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.587360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.587393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.587591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.587623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.587892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.587925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.588111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.588144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 
00:30:27.860 [2024-12-10 05:55:45.588326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.588361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.588477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.588508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.588624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.588656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.588851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.588883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.589092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.589125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 
00:30:27.860 [2024-12-10 05:55:45.589303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.589339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.589545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.589577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.589691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.589722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.589899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.589933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.590116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.590148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 
00:30:27.860 [2024-12-10 05:55:45.590341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.590375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.590644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.860 [2024-12-10 05:55:45.590676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.860 qpair failed and we were unable to recover it. 00:30:27.860 [2024-12-10 05:55:45.590796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.590829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 00:30:27.861 [2024-12-10 05:55:45.590955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.590987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 00:30:27.861 [2024-12-10 05:55:45.591167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.591200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 
00:30:27.861 [2024-12-10 05:55:45.591387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.591419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 00:30:27.861 [2024-12-10 05:55:45.591600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.591633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 00:30:27.861 [2024-12-10 05:55:45.591831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.591864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 00:30:27.861 [2024-12-10 05:55:45.592041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.592080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 00:30:27.861 [2024-12-10 05:55:45.592336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.861 [2024-12-10 05:55:45.592370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.861 qpair failed and we were unable to recover it. 
00:30:27.864 [2024-12-10 05:55:45.615538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.615570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.615750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.615783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.615897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.615929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.616111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.616143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.616335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.616369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 
00:30:27.864 [2024-12-10 05:55:45.616555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.616588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.616780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.616814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.616933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.616965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.617166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.617204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.617447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.617478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 
00:30:27.864 [2024-12-10 05:55:45.617590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.617622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.617766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.617799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.617921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.617953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.618192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.618234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.618491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.618522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 
00:30:27.864 [2024-12-10 05:55:45.618770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.618804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.619020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.619052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.619245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.619278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.619468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.619500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 00:30:27.864 [2024-12-10 05:55:45.619683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.864 [2024-12-10 05:55:45.619714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.864 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.619843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.619877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.620052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.620084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.620277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.620311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.620501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.620533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.620648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.620679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.620867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.620901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.621030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.621061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.621175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.621206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.621403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.621436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.621617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.621650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.621787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.621819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.622062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.622095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.622382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.622416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.622590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.622622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.622759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.622792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.622927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.622959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.623145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.623178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.623308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.623343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.623474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.623505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.623694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.623727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.623915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.623947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.624071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.624104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.624206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.624248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.624421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.624452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.624565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.624597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.624705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.624739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.624945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.624978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.625215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.625273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.625468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.625505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.625690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.625722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 
00:30:27.865 [2024-12-10 05:55:45.625911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.625943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.626148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.626180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.626494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.865 [2024-12-10 05:55:45.626530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.865 qpair failed and we were unable to recover it. 00:30:27.865 [2024-12-10 05:55:45.626801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.626833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.627117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.627151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 
00:30:27.866 [2024-12-10 05:55:45.627274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.627309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.627420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.627451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.627652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.627686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.627861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.627893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.628098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.628131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 
00:30:27.866 [2024-12-10 05:55:45.628372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.628406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.628515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.628546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.628670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.628702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.628889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.628923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.629102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.629135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 
00:30:27.866 [2024-12-10 05:55:45.629404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.629438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.629552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.629584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.629769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.629801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.629936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.629967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.630099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.630132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 
00:30:27.866 [2024-12-10 05:55:45.630313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.630347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.630541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.630572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.630678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.630710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.630882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.630916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.631023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.631055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 
00:30:27.866 [2024-12-10 05:55:45.631238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.631313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.631579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.631617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.631806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.631840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.632021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.632054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 00:30:27.866 [2024-12-10 05:55:45.632268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.866 [2024-12-10 05:55:45.632304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.866 qpair failed and we were unable to recover it. 
00:30:27.869 [2024-12-10 05:55:45.656073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.656106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.656279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.656312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.656437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.656469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.656589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.656621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.656806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.656838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 
00:30:27.869 [2024-12-10 05:55:45.657018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.657050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.657310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.657343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.657481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.657514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.657639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.657671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.657851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.657882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 
00:30:27.869 [2024-12-10 05:55:45.658052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.658084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.658292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.658325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.658443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.658476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.658683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.658715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.869 [2024-12-10 05:55:45.658836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.658870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 
00:30:27.869 [2024-12-10 05:55:45.659079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.869 [2024-12-10 05:55:45.659112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.869 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.659285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.659319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.659502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.659534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.659825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.659858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.659969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.660001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.660115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.660147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.660268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.660302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.660542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.660574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.660756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.660789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.661049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.661082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.661216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.661267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.661393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.661425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.661561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.661594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.661776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.661808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.661976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.662009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.662215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.662259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.662437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.662469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.662576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.662608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.662729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.662761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.662895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.662932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.663136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.663169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.663432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.663467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.663703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.663735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.663955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.663988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.664111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.664144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.664335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.664369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.664562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.664594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.664769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.664801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.664984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.665017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.665280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.665314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.665515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.665546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.665725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.665757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.665886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.665918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.666192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.666235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 00:30:27.870 [2024-12-10 05:55:45.666486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.666518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.870 qpair failed and we were unable to recover it. 
00:30:27.870 [2024-12-10 05:55:45.666714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.870 [2024-12-10 05:55:45.666747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.666928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.666960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.667228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.667262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.667448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.667480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.667617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.667650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 
00:30:27.871 [2024-12-10 05:55:45.667825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.667857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.668094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.668127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.668298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.668332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.668525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.668558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.668680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.668712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 
00:30:27.871 [2024-12-10 05:55:45.668838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.668870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.669070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.669108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.669295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.669328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.669520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.669552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.669684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.669716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 
00:30:27.871 [2024-12-10 05:55:45.669932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.669964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.670088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.670120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.670306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.670339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.670576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.670609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.670848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.670880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 
00:30:27.871 [2024-12-10 05:55:45.671054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.671086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.671267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.671300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.671412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.671443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.671554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.671587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-12-10 05:55:45.671783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.871 [2024-12-10 05:55:45.671815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:27.871 qpair failed and we were unable to recover it. 
00:30:27.871 [2024-12-10 05:55:45.672023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.672056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.672269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.672302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.672481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.672512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.672631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.672664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.672782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.672815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.672994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.673026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.673195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.673237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.673361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.673393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.673565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.673597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.673700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.673733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.673920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.673952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.674203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.674244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.674360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.871 [2024-12-10 05:55:45.674393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.871 qpair failed and we were unable to recover it.
00:30:27.871 [2024-12-10 05:55:45.674595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.674628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.674925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.674957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.675160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.675192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.675374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.675406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.675648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.675680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.675789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.675821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.676048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.676080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.676336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.676370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.676542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.676574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.676757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.676789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.676908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.676941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.677121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.677153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.677281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.677314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.677585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.677617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.677957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.678029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.678321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.678360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.678495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.678530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.678633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.678665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.678855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.678888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.679092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.679125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.679298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.679331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.679533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.679567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.679693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.679725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.679963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.679996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.680169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.680200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.680460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.680494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.680619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.680651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.680838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.680879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.681054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.681087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.681270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.681303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.681540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.681573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.681688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.681721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.681960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.681996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.682197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.682241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.682458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.682491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.682674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.682705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.682873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.682905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.683084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.872 [2024-12-10 05:55:45.683115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.872 qpair failed and we were unable to recover it.
00:30:27.872 [2024-12-10 05:55:45.683328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.683362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.683486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.683518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.683693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.683725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.683848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.683881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.684116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.684148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.684349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.684382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.684520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.684553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.684791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.684824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.684944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.684977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.685160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.685192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.685391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.685424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.685545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.685577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.685840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.685872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.685996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.686029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.686202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.686245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.686384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.686416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.686705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.686777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.686979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.687016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.687293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.687331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.687615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.687648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.687837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.687870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.688072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.688105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.688303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.688338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.688529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.688562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.688751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.688783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.688902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.688935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.689111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.689143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.689387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.689421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.689688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.689721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.689853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.689895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.690156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.690188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.690396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.690433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.690697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.690729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.690854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.690886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.691064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.691096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.691273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.691306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.691495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.691527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.691712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.691745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.691922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.873 [2024-12-10 05:55:45.691954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.873 qpair failed and we were unable to recover it.
00:30:27.873 [2024-12-10 05:55:45.692134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.692166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.692362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.692396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.692583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.692615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.692739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.692771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.693039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.693072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.693209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.693251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.693436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.693469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.693586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.693619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.693809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.693841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.693970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.694002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.694269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.694302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.694421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.694453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.694711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.694743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.694862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.694895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.695011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.695043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.695147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.695179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.695430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.695464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.695649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.695682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.695797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.695829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.696004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.696036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.696171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.696204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.696432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.696465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.696643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.696675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.696845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.696877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.697008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.874 [2024-12-10 05:55:45.697041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.874 qpair failed and we were unable to recover it.
00:30:27.874 [2024-12-10 05:55:45.697162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.697194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.697436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.697468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.697707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.697739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.698002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.698034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.698146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.698178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 
00:30:27.874 [2024-12-10 05:55:45.698300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.698334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.698512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.698545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.698718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.698751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.698937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.698969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 00:30:27.874 [2024-12-10 05:55:45.699163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.874 [2024-12-10 05:55:45.699195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.874 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.699397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.699430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.699533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.699566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.699694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.699727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.699852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.699885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.700147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.700180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.700308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.700342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.700600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.700633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.700869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.700901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.701080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.701113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.701385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.701419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.701591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.701623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.701749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.701781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.702026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.702059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.702245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.702279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.702542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.702575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.702748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.702779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.702963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.702996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.703244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.703278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.703514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.703546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.703788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.703821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.704054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.704086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.704272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.704305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.704510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.704547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.704661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.704693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.704952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.704987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.705237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.705271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.705483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.705516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.705704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.705737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.705983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.706016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.706155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.706188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.706380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.706414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.706631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.706664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.706882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.706915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.707102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.707135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.707313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.707347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 
00:30:27.875 [2024-12-10 05:55:45.707542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.875 [2024-12-10 05:55:45.707576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.875 qpair failed and we were unable to recover it. 00:30:27.875 [2024-12-10 05:55:45.707851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.707884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.708080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.708113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.708289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.708323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.708445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.708477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.708760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.708793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.708927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.708961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.709170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.709202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.709346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.709381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.709668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.709700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.709939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.709971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.710182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.710215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.710419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.710452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.710702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.710735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.711043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.711075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.711344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.711378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.711638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.711669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.711841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.711874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.712081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.712114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.712299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.712331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.712526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.712559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.712795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.712828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.712964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.712996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.713262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.713296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.713484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.713517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.713714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.713746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.713966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.713998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.714200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.714249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.714360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.714393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.714571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.714603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.714787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.714819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.715078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.715112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.715297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.715331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.715511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.715543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.715725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.715757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.876 [2024-12-10 05:55:45.716028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.716061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.716191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.716231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.716493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.716525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.716700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.716733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 00:30:27.876 [2024-12-10 05:55:45.716838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.876 [2024-12-10 05:55:45.716869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.876 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.716985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.717018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.717146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.717179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.717384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.717417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.717668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.717700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.717961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.717995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.718280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.718315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.718519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.718552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.718789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.718822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.719068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.719100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.719318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.719351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.719461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.719495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.719619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.719651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.719835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.719868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.720036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.720069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.720255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.720290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.720486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.720518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.720689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.720722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.720990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.721022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.721192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.721232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.721473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.721507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.721638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.721671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.721908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.721941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.722057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.722089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.722348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.722381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.722555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.722587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.722825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.722858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.723030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.723062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.723239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.723280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.723460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.723493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.723667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.723699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.723904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.723937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.724106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.724140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.724256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.724289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.724555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.724587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.724777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.724808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.724916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.724947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.725212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.725263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.725531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.725564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.725837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.725869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.726155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.726188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 
00:30:27.877 [2024-12-10 05:55:45.726351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.877 [2024-12-10 05:55:45.726384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.877 qpair failed and we were unable to recover it. 00:30:27.877 [2024-12-10 05:55:45.726657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.726691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.726880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.726914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.727085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.727118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.727267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.727301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.727539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.727572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.727762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.727794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.727995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.728028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.728262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.728295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.728483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.728515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.728704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.728736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.728911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.728944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.729137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.729170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.729381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.729415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.729593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.729626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.729796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.729829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.730007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.730040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.730326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.730360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.730600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.730634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.730773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.730805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.731015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.731048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.731165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.731199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.731448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.731481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.731745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.731777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.731963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.731998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.732177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.732210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.732414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.732447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.732709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.732748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.732982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.733015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.733303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.733337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.733580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.733613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.733786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.733819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.734078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.734110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.734346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.734379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.734552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.734584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 
00:30:27.878 [2024-12-10 05:55:45.734758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.734791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.735079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.735112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.735359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.735392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.735562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.735595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.878 qpair failed and we were unable to recover it. 00:30:27.878 [2024-12-10 05:55:45.735790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.878 [2024-12-10 05:55:45.735823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 
00:30:27.879 [2024-12-10 05:55:45.736001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.736034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.736155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.736188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.736451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.736484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.736691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.736723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.736923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.736955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 
00:30:27.879 [2024-12-10 05:55:45.737161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.737193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.737410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.737444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.737733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.737765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.737951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.737985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.738165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.738197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 
00:30:27.879 [2024-12-10 05:55:45.738396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.738429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.738599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.738632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.738873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.738906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.739209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.739254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 00:30:27.879 [2024-12-10 05:55:45.739443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.879 [2024-12-10 05:55:45.739476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.879 qpair failed and we were unable to recover it. 
00:30:27.879 [2024-12-10 05:55:45.739657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.739690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.739893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.739926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.740190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.740234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.740486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.740519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.740801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.740833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.741009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.741041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.741306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.741341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.741462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.741495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.741645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.741678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.741930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.741962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.742197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.742238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.742430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.742462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.742728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.742766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.742936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.742969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.743236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.743270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.743442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.743475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.743738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.743770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.744030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.744062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.744299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.744333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.744587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.744617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.744868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.744899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.745109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.745139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.745405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.745440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.745671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.745703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.745971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.746003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.879 [2024-12-10 05:55:45.746283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.879 [2024-12-10 05:55:45.746316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.879 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.746597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.746631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.746907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.746940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.747154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.747186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.747381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.747416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.747650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.747684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.747942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.747977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.748095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.748126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.748370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.748403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.748667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.748702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.748989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.749022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.749314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.749348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.749613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.749645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.749787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.749991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.750023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.750281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.750316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.750555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.750588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.750775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.750809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.751051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.751084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.751205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.751247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.751484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.751516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.751696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.751730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.751992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.752025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.752211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.752253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.752495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.752530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.752721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.752754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.752894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.752925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.753138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.753177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.753377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.753411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.753676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.753709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.753898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.753931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.754120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.754152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.754416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.754451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.754736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.754767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.755042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.755074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.755356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.755390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.755578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.755611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.755781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.755814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.756012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.756045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.756167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.756200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.756518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.756551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.880 qpair failed and we were unable to recover it.
00:30:27.880 [2024-12-10 05:55:45.756748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.880 [2024-12-10 05:55:45.756782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.756904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.756935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.757173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.757206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.757392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.757425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.757606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.757638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.757821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.757854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.758113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.758145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.758427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.758461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.758655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.758690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.758861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.758894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.759077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.759111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.759252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.759286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.759489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.759522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.759793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.759826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.760012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.760045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.760239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.760273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.760451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.760483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.760726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.760758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.760932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.760965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.761134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.761167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.761306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.761337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.761621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.761654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.761892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.761926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.762129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.762163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.762358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.762392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.762566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.762600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.762786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.762825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.763087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.763119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.763380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.881 [2024-12-10 05:55:45.763414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:27.881 qpair failed and we were unable to recover it.
00:30:27.881 [2024-12-10 05:55:45.763667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.763700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.764002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.764034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.764154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.764185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.764401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.764435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.764716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.764749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 
00:30:27.881 [2024-12-10 05:55:45.764989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.765023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.765212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.765256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.765446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.765479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.765753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.765788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.766068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.766101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 
00:30:27.881 [2024-12-10 05:55:45.766350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.766384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.766540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.766571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.766845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.766902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.881 [2024-12-10 05:55:45.767206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.881 [2024-12-10 05:55:45.767261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.881 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.767450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.767486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 
00:30:27.882 [2024-12-10 05:55:45.767661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.767697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.767917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.767951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.768216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.768265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.768535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.768571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.768846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.768896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 
00:30:27.882 [2024-12-10 05:55:45.769177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.769250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.769526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.769561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.769779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.769816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.770061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.770095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:27.882 [2024-12-10 05:55:45.770290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.770326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 
00:30:27.882 [2024-12-10 05:55:45.770507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.882 [2024-12-10 05:55:45.770542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:27.882 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.770788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.770843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.771107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.771153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.771463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.771535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.771721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.771768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 
00:30:28.162 [2024-12-10 05:55:45.771985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.772034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.772209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.772278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.772462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.772508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.772761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.772811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.772978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.773027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 
00:30:28.162 [2024-12-10 05:55:45.773311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.773369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.773587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.773629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.773829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.773889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.774087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.774133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.774431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.774471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 
00:30:28.162 [2024-12-10 05:55:45.774684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.774722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.774918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.774963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.775182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.162 [2024-12-10 05:55:45.775239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.162 qpair failed and we were unable to recover it. 00:30:28.162 [2024-12-10 05:55:45.775431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.775472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.775654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.775702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.775952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.775988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.776302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.776341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.776620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.776671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.776913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.776946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.777264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.777303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.777427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.777460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.777653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.777686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.777958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.777992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.778285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.778321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.778584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.778617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.778904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.778939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.779188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.779231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.779505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.779538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.779753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.779787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.780029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.780064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.780307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.780343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.780574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.780607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.780875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.780908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.781104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.781138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.781329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.781363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.781626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.781659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.781902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.781937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.782179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.782213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.782420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.782454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.782643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.782676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.782927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.782963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.783151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.783184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.783394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.783429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.783605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.783638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.783909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.783942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.784210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.784258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.784438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.784471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.784728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.784767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.784970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.785005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.785190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.785252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 
00:30:28.163 [2024-12-10 05:55:45.785500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.785534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.163 qpair failed and we were unable to recover it. 00:30:28.163 [2024-12-10 05:55:45.785731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.163 [2024-12-10 05:55:45.785766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.785945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.785980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.786245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.786280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.786564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.786599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 
00:30:28.164 [2024-12-10 05:55:45.786787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.786821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.786953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.786989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.787170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.787203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.787408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.787444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 00:30:28.164 [2024-12-10 05:55:45.787709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.164 [2024-12-10 05:55:45.787743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.164 qpair failed and we were unable to recover it. 
00:30:28.167 [the same three-record failure (posix.c:1054 connect() errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeated ~110 more times with advancing timestamps from 05:55:45.788051 through 05:55:45.812983; identical repeats elided]
00:30:28.167 [2024-12-10 05:55:45.813173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.813207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.813425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.813459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.813733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.813767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.813970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.814004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.814185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.814231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 
00:30:28.167 [2024-12-10 05:55:45.814431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.814464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.814646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.814679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.814818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.814851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.815040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.815075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.815189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.815235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 
00:30:28.167 [2024-12-10 05:55:45.815382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.815417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.815577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.815609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.815729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.815763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.815893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.815927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.816122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.816154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 
00:30:28.167 [2024-12-10 05:55:45.816447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.816500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.816688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.816721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.816862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.816896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.817013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.817046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.817173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.817206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 
00:30:28.167 [2024-12-10 05:55:45.817494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.817527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.817637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.817671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.817866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.817899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.818128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.818161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.818368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.818402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 
00:30:28.167 [2024-12-10 05:55:45.818523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.818556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.818775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.818809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.818929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.818964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.819088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.819120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.819315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.819351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 
00:30:28.167 [2024-12-10 05:55:45.819605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.819639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.167 [2024-12-10 05:55:45.819772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.167 [2024-12-10 05:55:45.819807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.167 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.819945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.819978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.820246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.820282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.820460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.820492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.820745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.820779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.820911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.820944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.821263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.821298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.821426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.821460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.821606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.821639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.821760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.821794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.821995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.822030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.822154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.822187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.822389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.822424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.822622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.822656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.822880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.822915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.823105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.823138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.823361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.823396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.823576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.823609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.823726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.823759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.823877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.823910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.824136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.824169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.824372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.824407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.824652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.824686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.824888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.824923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.825061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.825094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.825204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.825248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.825521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.825554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.825759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.825792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.825916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.825951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.826136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.826169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.826443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.826477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.826659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.826691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.826818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.826852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.827042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.827080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.827284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.827319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.827430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.827462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.827602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.827653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.827778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.827809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.168 [2024-12-10 05:55:45.827993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.828027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 
00:30:28.168 [2024-12-10 05:55:45.828228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.168 [2024-12-10 05:55:45.828264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.168 qpair failed and we were unable to recover it. 00:30:28.169 [2024-12-10 05:55:45.828505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.169 [2024-12-10 05:55:45.828540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.169 qpair failed and we were unable to recover it. 00:30:28.169 [2024-12-10 05:55:45.828653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.169 [2024-12-10 05:55:45.828685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.169 qpair failed and we were unable to recover it. 00:30:28.169 [2024-12-10 05:55:45.828875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.169 [2024-12-10 05:55:45.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.169 qpair failed and we were unable to recover it. 00:30:28.169 [2024-12-10 05:55:45.829098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.169 [2024-12-10 05:55:45.829131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.169 qpair failed and we were unable to recover it. 
00:30:28.169 [2024-12-10 05:55:45.829313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.169 [2024-12-10 05:55:45.829348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.169 qpair failed and we were unable to recover it.
00:30:28.169-00:30:28.172 [2024-12-10 05:55:45.829476 through 05:55:45.856958] (the same three-message pattern repeats roughly 110 more times with advancing timestamps: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:30:28.172 [2024-12-10 05:55:45.857132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.857166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.857423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.857457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.857668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.857700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.857888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.857923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.858108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.858141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 
00:30:28.172 [2024-12-10 05:55:45.858347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.858381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.858490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.858521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.858722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.858756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.858957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.858990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.859260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.859296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 
00:30:28.172 [2024-12-10 05:55:45.859475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.859509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.859774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.859808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.860074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.860110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.860327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.860363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.860558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.860592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 
00:30:28.172 [2024-12-10 05:55:45.860859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.860893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.861023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.861053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.861247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.861282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.861471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.861504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.861770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.861803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 
00:30:28.172 [2024-12-10 05:55:45.862087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.862122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.862406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.862440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.862686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.862720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.862904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.862937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.863179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.863212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 
00:30:28.172 [2024-12-10 05:55:45.863435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.863469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.863702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.863735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.863947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.863981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.864251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.864288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.864516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.864548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 
00:30:28.172 [2024-12-10 05:55:45.864794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.864827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.865029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.172 [2024-12-10 05:55:45.865064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.172 qpair failed and we were unable to recover it. 00:30:28.172 [2024-12-10 05:55:45.865239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.865273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.865472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.865504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.865749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.865790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.866080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.866114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.866390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.866424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.866677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.866709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.866835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.866866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.867065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.867100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.867359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.867394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.867585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.867618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.867802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.867837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.868110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.868145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.868416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.868452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.868710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.868743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.868955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.868990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.869181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.869214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.869531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.869565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.869803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.869837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.870029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.870062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.870331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.870365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.870553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.870587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.870709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.870741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.870982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.871015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.871261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.871295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.871539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.871573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.871793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.871827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.872036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.872071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.872272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.872308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.872553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.872586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.872733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.872767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.872986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.873019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.873315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.873350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.873560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.873593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.873731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.873764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.874034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.874067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.874176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.874208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.874493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.874526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.874797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.874831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 
00:30:28.173 [2024-12-10 05:55:45.875101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.173 [2024-12-10 05:55:45.875133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.173 qpair failed and we were unable to recover it. 00:30:28.173 [2024-12-10 05:55:45.875328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.174 [2024-12-10 05:55:45.875364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.174 qpair failed and we were unable to recover it. 00:30:28.174 [2024-12-10 05:55:45.875501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.174 [2024-12-10 05:55:45.875534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.174 qpair failed and we were unable to recover it. 00:30:28.174 [2024-12-10 05:55:45.875807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.174 [2024-12-10 05:55:45.875841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.174 qpair failed and we were unable to recover it. 00:30:28.174 [2024-12-10 05:55:45.876103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.174 [2024-12-10 05:55:45.876142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.174 qpair failed and we were unable to recover it. 
00:30:28.174 [2024-12-10 05:55:45.876459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.174 [2024-12-10 05:55:45.876496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.174 qpair failed and we were unable to recover it.
00:30:28.177 [2024-12-10 05:55:45.907231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.907267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.907562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.907595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.907905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.907940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.908231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.908268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.908531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.908572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 
00:30:28.177 [2024-12-10 05:55:45.908878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.908914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.909173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.909209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.909468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.909502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.909797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.909832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.910100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.910133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 
00:30:28.177 [2024-12-10 05:55:45.910356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.910392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.910584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.910617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.910872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.910907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.911122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.911157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.911352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.911386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 
00:30:28.177 [2024-12-10 05:55:45.911657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.911691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.911965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.911999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.912286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.912322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.912591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.912624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.912913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.912946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 
00:30:28.177 [2024-12-10 05:55:45.913152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.913186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.913469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.913503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.913770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.913805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.914101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.914137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.914427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.914462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 
00:30:28.177 [2024-12-10 05:55:45.914660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.914694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.914989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.915023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.915205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.915250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.915384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.915418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.177 qpair failed and we were unable to recover it. 00:30:28.177 [2024-12-10 05:55:45.915687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.177 [2024-12-10 05:55:45.915722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.915922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.915958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.916165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.916201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.916514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.916548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.916816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.916852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.917049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.917084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.917286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.917321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.917626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.917660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.917926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.917961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.918211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.918279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.918470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.918505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.918773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.918808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.919028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.919061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.919249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.919286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.919551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.919586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.919770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.919809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.920017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.920051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.920193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.920239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.920421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.920457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.920668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.920702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.920900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.920933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.921113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.921145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.921276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.921310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.921580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.921615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.921736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.921769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.921951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.921984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.922115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.922149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.922337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.922372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.922628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.922663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.922918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.922954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.923167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.923201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.923502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.923537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.923802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.923836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.924057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.924090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.924364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.924400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.924614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.924647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 
00:30:28.178 [2024-12-10 05:55:45.924874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.924908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.925164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.925200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.925490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.925526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.178 [2024-12-10 05:55:45.925670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.178 [2024-12-10 05:55:45.925704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.178 qpair failed and we were unable to recover it. 00:30:28.179 [2024-12-10 05:55:45.925976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.179 [2024-12-10 05:55:45.926009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.179 qpair failed and we were unable to recover it. 
00:30:28.179 [2024-12-10 05:55:45.926198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.179 [2024-12-10 05:55:45.926245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.179 qpair failed and we were unable to recover it. 00:30:28.179 [2024-12-10 05:55:45.926511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.179 [2024-12-10 05:55:45.926547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.179 qpair failed and we were unable to recover it. 00:30:28.179 [2024-12-10 05:55:45.926800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.179 [2024-12-10 05:55:45.926834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.179 qpair failed and we were unable to recover it. 00:30:28.179 [2024-12-10 05:55:45.927030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.179 [2024-12-10 05:55:45.927065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.179 qpair failed and we were unable to recover it. 00:30:28.179 [2024-12-10 05:55:45.927339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.179 [2024-12-10 05:55:45.927375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.179 qpair failed and we were unable to recover it. 
00:30:28.179 [2024-12-10 05:55:45.927662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.179 [2024-12-10 05:55:45.927711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.179 qpair failed and we were unable to recover it.
[... the same three-line error record (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock on tqpair=0x7f145c000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 05:55:45.927972 through 05:55:45.955705, differing only in timestamps ...]
00:30:28.182 [2024-12-10 05:55:45.955885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.955917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.956143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.956180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.956406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.956441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.956696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.956731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.956986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.957021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 
00:30:28.182 [2024-12-10 05:55:45.957203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.957260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.957389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.957423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.957617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.957653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.957922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.957957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.958234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.958269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 
00:30:28.182 [2024-12-10 05:55:45.958557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.958590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.958797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.958830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.959034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.959069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.959297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.959333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.959596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.959631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 
00:30:28.182 [2024-12-10 05:55:45.959931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.959965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.960239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.960274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.960553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.960587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.960869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.960902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.961085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.961119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 
00:30:28.182 [2024-12-10 05:55:45.961383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.961421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.961541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.961575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.961872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.961906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.962170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.962206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.962363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.962399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 
00:30:28.182 [2024-12-10 05:55:45.962527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.962561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.962756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.962790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.962991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.963028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.963257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.963295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.182 qpair failed and we were unable to recover it. 00:30:28.182 [2024-12-10 05:55:45.963505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.182 [2024-12-10 05:55:45.963538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.963794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.963828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.964131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.964167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.964465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.964503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.964657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.964692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.964992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.965027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.965248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.965284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.965572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.965606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.965877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.965912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.966196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.966240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.966424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.966458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.966759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.966793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.967062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.967104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.967385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.967421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.967605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.967640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.967904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.967939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.968123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.968158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.968388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.968424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.968626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.968660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.968795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.968829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.969070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.969104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.969379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.969413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.969691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.969726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.969938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.969974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.970191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.970250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.970388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.970422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.970610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.970645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.970851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.970885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.971074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.971107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.971303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.971338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.971566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.971599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.971734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.971770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.972049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.972084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.972338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.972372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.972593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.972627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.972901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.972935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 
00:30:28.183 [2024-12-10 05:55:45.973232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.973268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.973563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.973597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.973851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.973884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.183 qpair failed and we were unable to recover it. 00:30:28.183 [2024-12-10 05:55:45.974169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.183 [2024-12-10 05:55:45.974205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.974533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.974567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 
00:30:28.184 [2024-12-10 05:55:45.974844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.974878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.975068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.975102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.975307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.975342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.975550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.975585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.975717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.975752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 
00:30:28.184 [2024-12-10 05:55:45.975976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.976011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.976310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.976346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.976606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.976639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.976913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.976946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 00:30:28.184 [2024-12-10 05:55:45.977236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.184 [2024-12-10 05:55:45.977273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.184 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.006088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.006123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.006329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.006367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.006589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.006623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.006809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.006844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.007002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.007034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.007240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.007277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.007509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.007546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.007774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.007809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.008084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.008121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.008320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.008355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.008562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.008597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.008781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.008824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.009104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.009139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.009336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.009372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.009650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.009686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.009944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.009982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.010280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.010316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.010574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.010609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.010909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.010945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.011205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.011254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.011538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.011575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.011774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.011809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.012034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.012070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.012277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.012317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.012599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.012636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.012912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.012950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.013179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.013215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.013416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.013451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.013753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.013788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 00:30:28.187 [2024-12-10 05:55:46.013996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.187 [2024-12-10 05:55:46.014031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.187 qpair failed and we were unable to recover it. 
00:30:28.187 [2024-12-10 05:55:46.014248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.014286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.014565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.014602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.014738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.014773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.014955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.014990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.015195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.015243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.015497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.015532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.015793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.015828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.016023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.016059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.016343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.016380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.016579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.016615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.016795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.016831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.017040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.017074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.017307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.017346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.017530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.017565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.017840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.017875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.018147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.018183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.018436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.018474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.018726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.018761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.019065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.019100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.019382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.019418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.019700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.019734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.020022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.020064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.020215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.020261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.020460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.020495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.020682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.020719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.020971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.021007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.021263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.021299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.021513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.021549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.021753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.021789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.021992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.022028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.022329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.022367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.022495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.022526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.022746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.022783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.022968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.023003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.023204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.023255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 
00:30:28.188 [2024-12-10 05:55:46.023447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.023482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.023603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.023637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.023856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.023891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.024095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.188 [2024-12-10 05:55:46.024132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.188 qpair failed and we were unable to recover it. 00:30:28.188 [2024-12-10 05:55:46.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.189 [2024-12-10 05:55:46.024453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.189 qpair failed and we were unable to recover it. 
00:30:28.189 [2024-12-10 05:55:46.024708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.189 [2024-12-10 05:55:46.024742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.189 qpair failed and we were unable to recover it. 00:30:28.189 [2024-12-10 05:55:46.024964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.189 [2024-12-10 05:55:46.024999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.189 qpair failed and we were unable to recover it. 00:30:28.189 [2024-12-10 05:55:46.025190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.189 [2024-12-10 05:55:46.025244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.189 qpair failed and we were unable to recover it. 00:30:28.189 [2024-12-10 05:55:46.025459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.189 [2024-12-10 05:55:46.025494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.189 qpair failed and we were unable to recover it. 00:30:28.189 [2024-12-10 05:55:46.025744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.189 [2024-12-10 05:55:46.025779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.189 qpair failed and we were unable to recover it. 
00:30:28.189 [2024-12-10 05:55:46.026075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.189 [2024-12-10 05:55:46.026110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.189 qpair failed and we were unable to recover it.
00:30:28.192 [2024-12-10 05:55:46.056728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.056762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.057090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.057126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.057421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.057457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.057675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.057710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.057937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.057972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 
00:30:28.192 [2024-12-10 05:55:46.058262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.058298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.058571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.058606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.058826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.058861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.059094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.059130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.059337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.059373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 
00:30:28.192 [2024-12-10 05:55:46.059649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.059684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.059962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.059996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.060112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.060147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.060402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.060439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.060738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.060772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 
00:30:28.192 [2024-12-10 05:55:46.061063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.061097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.061323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.061359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.061640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.061676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.061875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.061909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.062121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.062156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 
00:30:28.192 [2024-12-10 05:55:46.062348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.062386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.062662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.062705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.062886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.062920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.063121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.063156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.063364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.063400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 
00:30:28.192 [2024-12-10 05:55:46.063679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.063714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.064020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.064056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.064326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.064383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.064657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.064691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 00:30:28.192 [2024-12-10 05:55:46.064957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.064992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.192 qpair failed and we were unable to recover it. 
00:30:28.192 [2024-12-10 05:55:46.065175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.192 [2024-12-10 05:55:46.065209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.065445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.065480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.065737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.065771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.066042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.066077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.066208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.066253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.066477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.066513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.066788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.066823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.067082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.067117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.067421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.067456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.067705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.067740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.067942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.067977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.068092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.068126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.068346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.068381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.068641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.068676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.068978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.069012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.069129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.069163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.069393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.069428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.069653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.069688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.069898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.069934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.070139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.070174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.070395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.070432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.070615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.070649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.070928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.070962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.071186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.071232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.071502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.071536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.071721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.071755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.071984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.072019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.072198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.072246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.072546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.072581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.072766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.072801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.072991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.073026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.073239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.073282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.073535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.073570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.073822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.073857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 00:30:28.193 [2024-12-10 05:55:46.074067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.193 [2024-12-10 05:55:46.074103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.193 qpair failed and we were unable to recover it. 
00:30:28.193 [2024-12-10 05:55:46.074331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.074367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.074643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.074678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.074887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.074922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.075102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.075136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.075326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.075363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 
00:30:28.194 [2024-12-10 05:55:46.075640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.075674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.075922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.075956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.076238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.076274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.076550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.076585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 00:30:28.194 [2024-12-10 05:55:46.076867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.194 [2024-12-10 05:55:46.076901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.194 qpair failed and we were unable to recover it. 
00:30:28.194 [2024-12-10 05:55:46.077185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.194 [2024-12-10 05:55:46.077230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.194 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats 70 more times for tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420, from 05:55:46.077 through 05:55:46.096 ...]
00:30:28.474 [2024-12-10 05:55:46.096590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.474 [2024-12-10 05:55:46.096672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:28.474 qpair failed and we were unable to recover it.
[... the same sequence repeats 35 more times for tqpair=0x1b2f500 with addr=10.0.0.2, port=4420, through 05:55:46.106 ...]
00:30:28.475 [2024-12-10 05:55:46.106530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.475 [2024-12-10 05:55:46.106595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.475 qpair failed and we were unable to recover it.
[... the same sequence repeats 7 more times for tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420, ending at 05:55:46.108 ...]
00:30:28.475 [2024-12-10 05:55:46.108964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.108998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.109229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.109265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.109400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.109435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.109637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.109672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.109947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.109981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 
00:30:28.475 [2024-12-10 05:55:46.110180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.110238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.110526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.110561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.110757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.110792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.110972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.111007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.111280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.111316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 
00:30:28.475 [2024-12-10 05:55:46.111644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.111678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.111896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.111930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.112230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.112266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.112550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.112584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.112850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.112884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 
00:30:28.475 [2024-12-10 05:55:46.113180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.113216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.113372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.113407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.113657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.113691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.113990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.114025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.114310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.114347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 
00:30:28.475 [2024-12-10 05:55:46.114625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.114660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.114849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.114882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.115143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.115178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.115388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.115424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.115541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.115576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 
00:30:28.475 [2024-12-10 05:55:46.115857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.115891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.116030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.475 [2024-12-10 05:55:46.116064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.475 qpair failed and we were unable to recover it. 00:30:28.475 [2024-12-10 05:55:46.116334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.116369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.116556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.116591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.116771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.116806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.117078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.117113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.117310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.117347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.117469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.117504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.117688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.117722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.117923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.117958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.118250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.118285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.118487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.118522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.118817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.118853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.119116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.119151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.119339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.119376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.119669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.119703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.119907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.119942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.120137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.120172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.120394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.120429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.120579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.120614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.120736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.120771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.120973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.121008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.121213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.121262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.121543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.121578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.121717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.121751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.121946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.121981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.122259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.122297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.122570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.122605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.122863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.122897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.123195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.123241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.123541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.123576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.123862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.123896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.124121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.124157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.124348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.124383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.124601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.124636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.476 [2024-12-10 05:55:46.124916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.124951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.125240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.125277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.125476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.125510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.125695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.125729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 00:30:28.476 [2024-12-10 05:55:46.126006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.476 [2024-12-10 05:55:46.126040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.476 qpair failed and we were unable to recover it. 
00:30:28.477 [2024-12-10 05:55:46.126307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.126344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.126636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.126671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.126865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.126899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.127201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.127247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.127455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.127490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 
00:30:28.477 [2024-12-10 05:55:46.127753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.127788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.128072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.128107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.128385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.128427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.128626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.128661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 00:30:28.477 [2024-12-10 05:55:46.128796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.128831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it. 
00:30:28.477 [2024-12-10 05:55:46.129049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.477 [2024-12-10 05:55:46.129084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.477 qpair failed and we were unable to recover it.
[log truncated: the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x7f145c000b90 (addr=10.0.0.2, port=4420) from 05:55:46.129049 through 05:55:46.160605]
00:30:28.480 [2024-12-10 05:55:46.160758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.160793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.160989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.161023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.161360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.161401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.161585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.161620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.161902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.161937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 
00:30:28.480 [2024-12-10 05:55:46.162233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.162269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.162533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.162568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.162827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.162862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.163064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.163097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.163280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.163317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 
00:30:28.480 [2024-12-10 05:55:46.163526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.163561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.163754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.163789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.164006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.164042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.164292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.164328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.164598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.164633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 
00:30:28.480 [2024-12-10 05:55:46.164819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.164854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.165126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.165161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.165461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.165498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.480 qpair failed and we were unable to recover it. 00:30:28.480 [2024-12-10 05:55:46.165699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.480 [2024-12-10 05:55:46.165734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.165987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.166022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.166319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.166355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.166609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.166643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.166949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.166983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.167170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.167205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.167402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.167438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.167714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.167749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.168007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.168042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.168253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.168289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.168540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.168579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.168716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.168751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.169024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.169057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.169270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.169305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.169535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.169570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.169852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.169887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.170068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.170103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.170409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.170446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.170628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.170662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.170918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.170953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.171240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.171276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.171549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.171584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.171840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.171875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.172124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.172160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.172382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.172425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.172621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.172655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.172906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.172940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.173141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.173175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.173459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.173495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.173770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.173805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.173941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.173975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.174176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.174211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.174427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.174462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.174716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.174749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.175005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.175040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.175291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.175327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 00:30:28.481 [2024-12-10 05:55:46.175631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.481 [2024-12-10 05:55:46.175666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.481 qpair failed and we were unable to recover it. 
00:30:28.481 [2024-12-10 05:55:46.175871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.175904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.176131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.176164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.176379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.176415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.176669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.176704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.176886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.176919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 
00:30:28.482 [2024-12-10 05:55:46.177169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.177203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.177393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.177428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.177682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.177716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.177895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.177930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.178259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.178295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 
00:30:28.482 [2024-12-10 05:55:46.178574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.178609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.178792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.178827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.179093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.179127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.179405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.179442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.179724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.179758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 
00:30:28.482 [2024-12-10 05:55:46.180035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.180068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.180315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.180351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.180616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.180650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.180902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.180937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.181240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.181274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 
00:30:28.482 [2024-12-10 05:55:46.181465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.181499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.181705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.181738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.182012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.182046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.182271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.182307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 00:30:28.482 [2024-12-10 05:55:46.182592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.482 [2024-12-10 05:55:46.182627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.482 qpair failed and we were unable to recover it. 
[... identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records repeat continuously through 2024-12-10 05:55:46.211828 ...]
00:30:28.485 [2024-12-10 05:55:46.212116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.212151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.212425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.212461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.212735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.212769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.213075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.213110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.213405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.213441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 
00:30:28.485 [2024-12-10 05:55:46.213669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.213703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.213884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.213918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.214182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.214241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.214498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.214534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.214802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.214836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 
00:30:28.485 [2024-12-10 05:55:46.215018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.215052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.215328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.215365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.215634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.215674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.215902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.215937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 00:30:28.485 [2024-12-10 05:55:46.216226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.485 [2024-12-10 05:55:46.216262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.485 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.216466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.216501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.216684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.216718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.216897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.216931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.217128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.217163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.217293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.217328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.217463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.217496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.217638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.217673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.217922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.217957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.218256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.218292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.218573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.218608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.218802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.218837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.219027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.219061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.219263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.219300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.219576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.219610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.219791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.219826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.219970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.220005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.220281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.220317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.220649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.220911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.220945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.221088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.221123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.221401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.221436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.221692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.221725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.221921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.221954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.222260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.222297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.222560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.222595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.222810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.222843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.223094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.223129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.223429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.223466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.223753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.223788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.223990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.224025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.224137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.224172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.224364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.224399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.224670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.224704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.224848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.224882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.225153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.225187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 
00:30:28.486 [2024-12-10 05:55:46.225390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.225425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.225678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.486 [2024-12-10 05:55:46.225712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.486 qpair failed and we were unable to recover it. 00:30:28.486 [2024-12-10 05:55:46.225913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.225955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.226228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.226263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.226570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.226604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 
00:30:28.487 [2024-12-10 05:55:46.226786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.226820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.227016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.227051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.227253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.227289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.227543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.227577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.227765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.227800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 
00:30:28.487 [2024-12-10 05:55:46.228053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.228089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.228305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.228341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.228481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.228515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.228706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.228739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.229016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.229052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 
00:30:28.487 [2024-12-10 05:55:46.229238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.229275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.229467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.229502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.229752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.229786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.230068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.230103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.230239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.230275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 
00:30:28.487 [2024-12-10 05:55:46.230468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.230503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.230779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.230815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.231004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.231039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.231299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.231335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.231537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.231572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 
00:30:28.487 [2024-12-10 05:55:46.231840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.231875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.232129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.232165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.232388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.232422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.232699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.232733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 00:30:28.487 [2024-12-10 05:55:46.232868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.487 [2024-12-10 05:55:46.232902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.487 qpair failed and we were unable to recover it. 
00:30:28.490 [2024-12-10 05:55:46.262348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.490 [2024-12-10 05:55:46.262384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.490 qpair failed and we were unable to recover it. 00:30:28.490 [2024-12-10 05:55:46.262615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.490 [2024-12-10 05:55:46.262648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.490 qpair failed and we were unable to recover it. 00:30:28.490 [2024-12-10 05:55:46.262955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.490 [2024-12-10 05:55:46.262991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.490 qpair failed and we were unable to recover it. 00:30:28.490 [2024-12-10 05:55:46.263250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.490 [2024-12-10 05:55:46.263285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.263533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.263570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.263847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.263881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.264086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.264120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.264343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.264378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.264505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.264537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.264830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.264870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.265139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.265174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.265462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.265497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.265631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.265666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.265882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.265918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.266110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.266144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.266445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.266482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.266713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.266749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.266946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.266980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.267175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.267209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.267435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.267471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.267603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.267634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.267889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.267924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.268049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.268083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.268368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.268404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.268550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.268584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.268768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.268804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.269012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.269046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.269252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.269288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.269497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.269531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.269786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.269821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.270074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.270110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.270233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.270269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.270463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.270497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.270695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.270728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.270850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.270883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.271063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.271098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.271379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.271418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.271600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.271635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.271916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.271951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.272143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.272178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 
00:30:28.491 [2024-12-10 05:55:46.272343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.491 [2024-12-10 05:55:46.272381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.491 qpair failed and we were unable to recover it. 00:30:28.491 [2024-12-10 05:55:46.272615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.272651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.272871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.272907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.273123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.273159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.273378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.273415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 
00:30:28.492 [2024-12-10 05:55:46.273638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.273672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.273942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.273977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.274182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.274215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.274456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.274491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.274745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.274784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 
00:30:28.492 [2024-12-10 05:55:46.275068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.275105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.275329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.275364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.275668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.275704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.275959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.275995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.276125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.276158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 
00:30:28.492 [2024-12-10 05:55:46.276420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.276455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.276637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.276673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.276870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.276904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.277107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.277142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.277362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.277397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 
00:30:28.492 [2024-12-10 05:55:46.277540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.277573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.277824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.277860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.278071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.278108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.278316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.278352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.278563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.278598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 
00:30:28.492 [2024-12-10 05:55:46.278801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.278835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.279124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.279158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.279378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.279415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.279700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.279737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.279958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.279992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 
00:30:28.492 [2024-12-10 05:55:46.280271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.280308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.280506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.280539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.492 [2024-12-10 05:55:46.280796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.492 [2024-12-10 05:55:46.280831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.492 qpair failed and we were unable to recover it. 00:30:28.493 [2024-12-10 05:55:46.281042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.493 [2024-12-10 05:55:46.281076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.493 qpair failed and we were unable to recover it. 00:30:28.493 [2024-12-10 05:55:46.281264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.493 [2024-12-10 05:55:46.281299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.493 qpair failed and we were unable to recover it. 
00:30:28.493 [2024-12-10 05:55:46.281565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.493 [2024-12-10 05:55:46.281600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.493 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.312331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.312367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.312617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.312650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.312909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.312945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.313068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.313102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.313381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.313416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.313626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.313659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.313854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.313888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.314156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.314190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.314482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.314518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.314726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.314760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.315059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.315095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.315296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.315331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.315606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.315641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.315897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.315931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.316185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.316229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.316511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.316545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.316736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.316770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.317044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.317079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.317380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.317414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.317623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.317660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.317848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.317881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.318163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.318199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.318465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.318498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.318702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.318736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.318929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.318963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.319099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.319132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.319411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.319447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.319712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.319745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.319938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.319972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.320238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.320272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 
00:30:28.496 [2024-12-10 05:55:46.320476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.320513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.320654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.320689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.320896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.496 [2024-12-10 05:55:46.320931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.496 qpair failed and we were unable to recover it. 00:30:28.496 [2024-12-10 05:55:46.321238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.321277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.321544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.321578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.321779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.321813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.321942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.321973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.322256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.322292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.322564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.322597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.322814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.322850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.323035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.323071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.323250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.323286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.323471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.323506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.323712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.323747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.323932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.323966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.324246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.324281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.324563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.324598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.324878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.324912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.325168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.325200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.325479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.325515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.325767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.325801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.326026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.326059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.326321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.326356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.326660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.326692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.326891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.326924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.327243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.327278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.327475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.327509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.327693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.327727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.327914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.327948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.328243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.328284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.328570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.328604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.328806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.328842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.329097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.329133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.329341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.329377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 00:30:28.497 [2024-12-10 05:55:46.329643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-10 05:55:46.329679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.497 qpair failed and we were unable to recover it. 
00:30:28.497 [2024-12-10 05:55:46.329835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.329870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.330122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.330156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.330399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.330435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.330707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.330741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.330970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.331004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 
00:30:28.498 [2024-12-10 05:55:46.331225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.331260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.331454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.331487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.331678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.331712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.331997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.332030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 00:30:28.498 [2024-12-10 05:55:46.332167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-10 05:55:46.332203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.498 qpair failed and we were unable to recover it. 
00:30:28.498 [2024-12-10 05:55:46.332402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.498 [2024-12-10 05:55:46.332438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.498 qpair failed and we were unable to recover it.
00:30:28.501 [2024-12-10 05:55:46.362056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.362090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.362346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.362382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.362514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.362550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.362815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.362849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.363000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.363035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 
00:30:28.501 [2024-12-10 05:55:46.363291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.363328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.363535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.363570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.363843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.363879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.364113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.364146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.364348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.364384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 
00:30:28.501 [2024-12-10 05:55:46.364690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.364724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.364981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.365015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.365212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.365261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.365520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.365557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 00:30:28.501 [2024-12-10 05:55:46.365741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.365776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.501 qpair failed and we were unable to recover it. 
00:30:28.501 [2024-12-10 05:55:46.365959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.501 [2024-12-10 05:55:46.365992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.366228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.366264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.366567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.366601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.366854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.366887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.367197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.367243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 
00:30:28.502 [2024-12-10 05:55:46.367426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.367459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.367659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.367693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.367894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.367929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.368156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.368191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.368338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.368373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 
00:30:28.502 [2024-12-10 05:55:46.368563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.368598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.368901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.368934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.369188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.369254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.369459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.369493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.369774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.369816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 
00:30:28.502 [2024-12-10 05:55:46.369945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.369982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.370256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.370291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.370495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.370530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.370732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.370768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.370987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.371021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 
00:30:28.502 [2024-12-10 05:55:46.371239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.371273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.371479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.371512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.371767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.371801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.371990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.372027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.372301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.372337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 
00:30:28.502 [2024-12-10 05:55:46.372526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.372561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.372831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.372866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.373049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.373084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.373244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.373279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.373572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.373606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 
00:30:28.502 [2024-12-10 05:55:46.373877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.373910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.374182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.374242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.374374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.374409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.374714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.374749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.502 qpair failed and we were unable to recover it. 00:30:28.502 [2024-12-10 05:55:46.374948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.502 [2024-12-10 05:55:46.374984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 
00:30:28.503 [2024-12-10 05:55:46.375243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.375280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.375475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.375508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.375709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.375745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.375928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.375961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.376254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.376290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 
00:30:28.503 [2024-12-10 05:55:46.376515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.376562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.376826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.376863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.377049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.377084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.377196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.377256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.377465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.377500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 
00:30:28.503 [2024-12-10 05:55:46.377750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.377787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.378082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.378115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.378404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.378438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.378697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.378730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.378856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.378888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 
00:30:28.503 [2024-12-10 05:55:46.379163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.379197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.379408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.379443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.379675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.379711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.379906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.379941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.380167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.380208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 
00:30:28.503 [2024-12-10 05:55:46.380364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.380399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.380532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.380565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.380688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.380723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.380854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.380889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 00:30:28.503 [2024-12-10 05:55:46.381076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.381110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it. 
00:30:28.503 [2024-12-10 05:55:46.381301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.503 [2024-12-10 05:55:46.381335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.503 qpair failed and we were unable to recover it.
[identical error triple — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable — repeated continuously from 05:55:46.381619 through 05:55:46.410958; repeats elided]
00:30:28.783 [2024-12-10 05:55:46.410958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.410997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 
00:30:28.783 [2024-12-10 05:55:46.411164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.411202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 00:30:28.783 [2024-12-10 05:55:46.411404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.411446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 00:30:28.783 [2024-12-10 05:55:46.411650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.411696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 00:30:28.783 [2024-12-10 05:55:46.412006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.412043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 00:30:28.783 [2024-12-10 05:55:46.412277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.412316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 
00:30:28.783 [2024-12-10 05:55:46.412633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.412672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.783 qpair failed and we were unable to recover it. 00:30:28.783 [2024-12-10 05:55:46.412950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.783 [2024-12-10 05:55:46.412987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.413273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.413326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.413576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.413613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.413898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.413936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.414120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.414155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.414457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.414496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.414709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.414743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.415036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.415072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.415195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.415243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.415453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.415487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.415616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.415650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.415782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.415818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.416077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.416111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.416297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.416334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.416483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.416518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.416790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.416827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.417152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.417190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.417430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.417466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.417606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.417644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.417943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.417978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.418193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.418263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.418570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.418605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.418787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.418821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.419092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.419128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.419317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.419354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.419584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.419621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.419844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.419881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.420116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.420152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.420350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.420386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.420586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.420622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.420907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.420942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.421162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.421197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.421408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.421444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.421713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.421749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.421984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.422019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.422331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.422369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.422635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.422672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.422830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.422867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.784 [2024-12-10 05:55:46.423061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.423098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 
00:30:28.784 [2024-12-10 05:55:46.423307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.784 [2024-12-10 05:55:46.423348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.784 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.423492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.423528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.423735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.423772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.423966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.424003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.424262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.424301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.785 [2024-12-10 05:55:46.424557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.424592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.424830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.424867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.425064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.425096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.425377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.425410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.425613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.425644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.785 [2024-12-10 05:55:46.425843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.425875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.426128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.426164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.426472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.426505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.426784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.426815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.427102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.427136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.785 [2024-12-10 05:55:46.427338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.427371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.427628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.427659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.427862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.427893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.428037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.428067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.428192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.428235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.785 [2024-12-10 05:55:46.428437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.428468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.428618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.428657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.428955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.428988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.429202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.429255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.429471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.429503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.785 [2024-12-10 05:55:46.429722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.429757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.430017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.430050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.430357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.430393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.430554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.430590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.430737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.430776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.785 [2024-12-10 05:55:46.430988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.431024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.431235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.431274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.431412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.431446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.431668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.431703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 00:30:28.785 [2024-12-10 05:55:46.432005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.785 [2024-12-10 05:55:46.432046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.785 qpair failed and we were unable to recover it. 
00:30:28.788 [2024-12-10 05:55:46.460456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.788 [2024-12-10 05:55:46.460491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.788 qpair failed and we were unable to recover it. 00:30:28.788 [2024-12-10 05:55:46.460644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.788 [2024-12-10 05:55:46.460678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.788 qpair failed and we were unable to recover it. 00:30:28.788 [2024-12-10 05:55:46.460958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.460992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.461194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.461257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.461541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.461576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.461831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.461865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.462177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.462249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.462495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.462540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.462810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.462845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.463064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.463099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.463385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.463421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.463702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.463737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.464017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.464051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.464258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.464294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.464570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.464605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.464800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.464834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.465083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.465117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.465344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.465381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.465565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.465598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.465744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.465778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.465980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.466015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.466321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.466357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.466638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.466672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.466820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.466854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.467115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.467150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.467336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.467372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.467559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.467591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.467772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.467806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.468061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.468097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.468343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.468380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.468580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.468614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.468794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.468828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.469127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.469162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.469370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.469405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.469686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.469720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.789 [2024-12-10 05:55:46.469937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.469972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.470114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.470154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.470350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.470387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.470569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.470602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 00:30:28.789 [2024-12-10 05:55:46.470806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.789 [2024-12-10 05:55:46.470840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.789 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.471023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.471058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.471268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.471304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.471575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.471609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.471825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.471859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.472038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.472074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.472349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.472386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.472568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.472601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.472895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.472929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.473118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.473151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.473452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.473488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.473630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.473664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.473937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.473971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.474254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.474290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.474569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.474602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.474885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.474922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.475199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.475259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.475508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.475542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.475811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.475845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.476098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.476133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.476384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.476421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.476724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.476760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.477055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.477090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.477362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.477398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.477532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.477567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.477759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.477793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.478070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.478105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.478410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.478447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.478694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.478728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.478988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.479022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.479324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.479359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.479619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.479653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.479952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.479987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.480191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.480237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.480454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.480488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.480693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.480727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 
00:30:28.790 [2024-12-10 05:55:46.481003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.481037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.481308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.481350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.481641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.481675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.481934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.790 [2024-12-10 05:55:46.481969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.790 qpair failed and we were unable to recover it. 00:30:28.790 [2024-12-10 05:55:46.482263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.482299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.482506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.482541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.482763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.482798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.482990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.483024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.483294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.483331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.483612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.483645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.483839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.483874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.484128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.484163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.484474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.484509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.484763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.484797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.484980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.485015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.485215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.485281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.485492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.485526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.485836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.485870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.486142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.486177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.486403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.486439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.486739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.486774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.486902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.486936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.487186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.487232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.487513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.487547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.487749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.487784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.487968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.488002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.488278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.488314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.488466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.488500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.488760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.488794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.489009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.489043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.489233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.489269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.489545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.489578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.489772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.489806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.490056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.490091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.490228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.490264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.490529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.490564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.490758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.490792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.491069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.491103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.491412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.491448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.491704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.491739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 
00:30:28.791 [2024-12-10 05:55:46.492040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.492074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.791 [2024-12-10 05:55:46.492286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.791 [2024-12-10 05:55:46.492328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.791 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.492550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.492584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.492768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.492804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.493070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.493104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 
00:30:28.792 [2024-12-10 05:55:46.493327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.493363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.493572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.493606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.493857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.493892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.494085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.494121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.494329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.494365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 
00:30:28.792 [2024-12-10 05:55:46.494668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.494702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.494997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.495032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.495320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.495356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.495628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.495664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.495868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.495902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 
00:30:28.792 [2024-12-10 05:55:46.496161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.496196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.496495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.496529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.496818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.496853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.497056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.497091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.497285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.497321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 
00:30:28.792 [2024-12-10 05:55:46.497600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.497634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.497913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.497947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.498090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.498125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.498378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.498417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.498614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.498648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 
00:30:28.792 [2024-12-10 05:55:46.498942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.498977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.499160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.499195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.499486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.499524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.499781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.499816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.500093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.500130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 
00:30:28.792 [2024-12-10 05:55:46.500386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.500422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.500634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.500668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.500923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.500958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.501169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.501206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.792 qpair failed and we were unable to recover it. 00:30:28.792 [2024-12-10 05:55:46.501521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.792 [2024-12-10 05:55:46.501556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.501833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.501868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.501985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.502019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.502292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.502329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.502476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.502511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.502694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.502729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.502847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.502882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.503144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.503185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.503482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.503519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.503773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.503808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.504010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.504043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.504249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.504285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.504556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.504591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.504726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.504760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.504911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.504947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.505230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.505269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.505530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.505564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.505760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.505794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.505980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.506017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.506292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.506329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.506526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.506563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.506767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.506801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.506924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.506958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.507255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.507291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.507427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.507462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.507741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.507775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.508069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.508104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.508394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.508432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.508705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.508739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.508936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.508971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 00:30:28.793 [2024-12-10 05:55:46.509232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.793 [2024-12-10 05:55:46.509267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.793 qpair failed and we were unable to recover it. 
00:30:28.793 [2024-12-10 05:55:46.509543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.509580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.509809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.509846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.510148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.510183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.510459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.510495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.510690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.510724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.511028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.511063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.511320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.511358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.793 qpair failed and we were unable to recover it.
00:30:28.793 [2024-12-10 05:55:46.511545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.793 [2024-12-10 05:55:46.511579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.511858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.511894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.512152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.512188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.512319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.512354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.512610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.512645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.512948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.512984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.513233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.513270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.513477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.513513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.513645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.513679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.513809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.513848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.513973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.514008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.514270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.514305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.514560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.514594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.514735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.514770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.515024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.515059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.515264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.515302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.515488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.515522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.515710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.515747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.515944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.515979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.516243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.516561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.516597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.516888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.516924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.517073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.517108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.517301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.517339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.517591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.517625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.517877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.517914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.518122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.518157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.518419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.518455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.518736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.518769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.518960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.518995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.519243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.519279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.519562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.519595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.519784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.519819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.520003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.520041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.520238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.520274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.520458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.520493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.520708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.520743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.521012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.521049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.521308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.521345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.794 qpair failed and we were unable to recover it.
00:30:28.794 [2024-12-10 05:55:46.521586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.794 [2024-12-10 05:55:46.521620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.521827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.521862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.522141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.522175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.522387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.522423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.522614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.522650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.522918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.522953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.523206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.523267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.523545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.523579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.523773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.523809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.524019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.524055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.524308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.524353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.524637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.524672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.524939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.524975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.525159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.525192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.525401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.525436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.525692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.525727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.525942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.525976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.526228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.526265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.526464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.526499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.526765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.526799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.527076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.527112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.527397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.527435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.527707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.527742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.527955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.527991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.528111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.528144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.528399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.528436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.528624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.528657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.528934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.528970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.529249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.529287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.529543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.529576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.529856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.529891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.530030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.530066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.530256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.530299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.530531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.530566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.530862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.530896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.531015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.531052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.531255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.531292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.531478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.531517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.531782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.531819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.795 qpair failed and we were unable to recover it.
00:30:28.795 [2024-12-10 05:55:46.531954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.795 [2024-12-10 05:55:46.531988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.532271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.532309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.532609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.532642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.532832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.532866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.533102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.533138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.533357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.533393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.533677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.533711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.533932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.533966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.534198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.534245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.534459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.796 [2024-12-10 05:55:46.534493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.796 qpair failed and we were unable to recover it.
00:30:28.796 [2024-12-10 05:55:46.534714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.534752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.535004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.535039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.535348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.535383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.535635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.535670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.535921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.535957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 
00:30:28.796 [2024-12-10 05:55:46.536158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.536194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.536397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.536434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.536714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.536750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.536935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.536971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.537158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.537191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 
00:30:28.796 [2024-12-10 05:55:46.537400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.537435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.537625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.537661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.537866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.537900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.538084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.538118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.538376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.538413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 
00:30:28.796 [2024-12-10 05:55:46.538625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.538658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.538871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.538906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.539161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.539197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.539487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.539521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.539818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.539854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 
00:30:28.796 [2024-12-10 05:55:46.540131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.540167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.540498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.540532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.540725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.540759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.541012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.541047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.541350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.541385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 
00:30:28.796 [2024-12-10 05:55:46.541508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.541543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.541813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.541847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.796 qpair failed and we were unable to recover it. 00:30:28.796 [2024-12-10 05:55:46.542117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.796 [2024-12-10 05:55:46.542151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.542464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.542506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.542785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.542818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.797 [2024-12-10 05:55:46.543094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.543128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.543313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.543348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.543609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.543642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.543817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.543852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.544106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.544140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.797 [2024-12-10 05:55:46.544370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.544405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.544657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.544691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.544896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.544930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.545131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.545165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.545403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.545438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.797 [2024-12-10 05:55:46.545706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.545739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.545920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.545955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.546144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.546178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.546470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.546505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.546696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.546730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.797 [2024-12-10 05:55:46.546955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.546988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.547254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.547289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.547559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.547593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.547847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.547881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.548177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.548212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.797 [2024-12-10 05:55:46.548442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.548475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.548594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.548629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.548820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.548855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.549129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.549164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.549449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.549485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.797 [2024-12-10 05:55:46.549707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.549741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.549931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.549965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.550178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.550212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.550411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.550445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 00:30:28.797 [2024-12-10 05:55:46.550646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.797 [2024-12-10 05:55:46.550681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.797 qpair failed and we were unable to recover it. 
00:30:28.798 [2024-12-10 05:55:46.550813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.550847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.550983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.551016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.551269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.551306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.551502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.551536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.551748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.551782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 
00:30:28.798 [2024-12-10 05:55:46.551974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.552009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.552197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.552242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.552538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.552571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.552850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.552890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.553106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.553141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 
00:30:28.798 [2024-12-10 05:55:46.553419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.553454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.553738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.553773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.553962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.553997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.554179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.554213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.554425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.554458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 
00:30:28.798 [2024-12-10 05:55:46.554737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.554771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.555050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.555084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.555285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.555321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.555574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.555607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.555830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.555865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 
00:30:28.798 [2024-12-10 05:55:46.556091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.556125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.556353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.556389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.556660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.556693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.556958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.556993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 00:30:28.798 [2024-12-10 05:55:46.557119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.798 [2024-12-10 05:55:46.557153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.798 qpair failed and we were unable to recover it. 
00:30:28.798 [2024-12-10 05:55:46.557335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.557369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.557622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.557656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.557878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.557912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.558184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.558229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.558534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.558569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.558716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.558749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.558876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.558911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.559187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.559234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.559491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.559525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.559818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.559853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.560127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.798 [2024-12-10 05:55:46.560162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.798 qpair failed and we were unable to recover it.
00:30:28.798 [2024-12-10 05:55:46.560449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.560485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.560762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.560796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.561081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.561115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.561321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.561357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.561640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.561673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.561952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.561986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.562268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.562302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.562582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.562615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.562873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.562907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.563161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.563195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.563485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.563519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.563723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.563758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.563904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.563943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.564132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.564166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.564361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.564396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.564617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.564651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.564926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.564959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.565153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.565188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.565482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.565517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.565798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.565832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.565958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.565992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.566195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.566239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.566512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.566545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.566814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.566848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.567050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.567086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.567362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.567398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.567687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.567721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.568000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.568036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.568290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.568324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.568577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.568611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.568790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.568824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.569116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.569151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.569283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.569381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.569568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.569603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.569907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.569942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.570230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.799 [2024-12-10 05:55:46.570266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.799 qpair failed and we were unable to recover it.
00:30:28.799 [2024-12-10 05:55:46.570540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.570573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.570755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.570788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.570969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.571003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.571263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.571299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.571575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.571608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.571889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.571923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.572209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.572253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.572481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.572515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.572722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.572755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.573030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.573063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.573281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.573316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.573572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.573605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.573907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.573940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.574239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.574274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.574567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.574600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.574865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.574899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.575150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.575195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.575432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.575467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.575618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.575652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.575835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.575869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.576060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.576092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.576386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.576421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.576656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.576690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.576960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.576992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.577197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.577239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.577460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.577493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.577696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.577729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.578025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.578058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.578265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.578300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.578484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.578516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.578720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.578755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.579014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.579048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.579262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.579297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.579411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.579442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.579716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.579750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.579866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.579899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.580169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.580202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.580422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.580456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.800 [2024-12-10 05:55:46.580688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.800 [2024-12-10 05:55:46.580721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.800 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.580927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.580960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.581178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.581210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.581490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.581532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.581732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.581767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.582039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.582074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.582359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.582395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.582629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.582663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.582877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.582910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.583089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.583122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.583400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.583436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.583621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.583654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.583786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.583818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.584002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.584034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.584246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.584286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.584583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.584628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.584963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.584997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.585268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.585304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.585598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.585638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.585769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.585801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.585998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.586031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.586148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.586179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.586493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.586528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.586827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.586861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.587127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.587161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.587355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.587391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.587541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.587574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.587781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.587815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.588003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.588037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.588314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.801 [2024-12-10 05:55:46.588349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.801 qpair failed and we were unable to recover it.
00:30:28.801 [2024-12-10 05:55:46.588669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.801 [2024-12-10 05:55:46.588703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.801 qpair failed and we were unable to recover it. 00:30:28.801 [2024-12-10 05:55:46.589003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.801 [2024-12-10 05:55:46.589036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.801 qpair failed and we were unable to recover it. 00:30:28.801 [2024-12-10 05:55:46.589177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.801 [2024-12-10 05:55:46.589211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.801 qpair failed and we were unable to recover it. 00:30:28.801 [2024-12-10 05:55:46.589416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.801 [2024-12-10 05:55:46.589450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.801 qpair failed and we were unable to recover it. 00:30:28.801 [2024-12-10 05:55:46.589763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.589797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.590023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.590056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.590270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.590306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.590556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.590590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.590772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.590804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.591084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.591117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.591246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.591281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.591467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.591500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.591696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.591729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.592014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.592048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.592343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.592377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.592567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.592601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.592854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.592887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.593162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.593197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.593317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.593352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.593606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.593640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.593918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.593953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.594239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.594274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.594552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.594585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.594726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.594758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.594949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.594982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.595204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.595246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.595523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.595556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.595757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.595790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.596002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.596040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.596243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.596279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.596558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.596591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.596718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.596751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.596951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.596984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.597169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.597201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.597534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.597570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.597782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.597815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.598028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.598061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.598211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.598259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.598453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.598488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.598788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.598820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 
00:30:28.802 [2024-12-10 05:55:46.599072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.599105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.599405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.599441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.802 [2024-12-10 05:55:46.599725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.802 [2024-12-10 05:55:46.599759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.802 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.599898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.599931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.600238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.600274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.600549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.600583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.600838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.600871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.601173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.601206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.601510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.601544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.601834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.601868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.602140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.602174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.602467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.602503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.602768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.602801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.603099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.603132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.603401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.603437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.603724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.603758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.603889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.603923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.604128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.604162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.604294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.604329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.604520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.604553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.604756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.604789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.604972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.605005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.605236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.605272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.605573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.605607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.605806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.605839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.606087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.606120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.606380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.606417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.606667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.606701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.606953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.606991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.607208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.607257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.607529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.607562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.607841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.607873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.608109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.608142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.608345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.608380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 00:30:28.803 [2024-12-10 05:55:46.608576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.803 [2024-12-10 05:55:46.608609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.803 qpair failed and we were unable to recover it. 
00:30:28.803 [2024-12-10 05:55:46.608879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.803 [2024-12-10 05:55:46.608913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.803 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats continuously from 05:55:46.609114 through 05:55:46.640533 for the same tqpair=0x7f145c000b90, addr=10.0.0.2, port=4420; repeats elided ...]
00:30:28.806 [2024-12-10 05:55:46.640822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.806 [2024-12-10 05:55:46.640855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.806 qpair failed and we were unable to recover it. 00:30:28.806 [2024-12-10 05:55:46.641152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.806 [2024-12-10 05:55:46.641186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.806 qpair failed and we were unable to recover it. 00:30:28.806 [2024-12-10 05:55:46.641342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.806 [2024-12-10 05:55:46.641377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.806 qpair failed and we were unable to recover it. 00:30:28.806 [2024-12-10 05:55:46.641651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.806 [2024-12-10 05:55:46.641684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.806 qpair failed and we were unable to recover it. 00:30:28.806 [2024-12-10 05:55:46.641963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.641996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.642214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.642260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.642563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.642597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.642876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.642909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.643106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.643139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.643352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.643388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.643588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.643621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.643921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.643954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.644173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.644208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.644403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.644438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.644738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.644771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.645005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.645038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.645183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.645229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.645434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.645468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.645672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.645707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.645893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.645927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.646177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.646210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.646432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.646467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.646673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.646707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.646826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.646860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.647075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.647109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.647416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.647452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.647705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.647738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.648047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.648080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.648362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.648397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.648651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.648684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.648989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.649022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.649332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.649368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.649630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.649663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.649936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.649969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.650250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.650285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.650568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.650601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.650804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.650837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.650960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.650993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.651269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.651311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.651578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.651612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 
00:30:28.807 [2024-12-10 05:55:46.651900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.651933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.652120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.652152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.652411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.652447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.807 qpair failed and we were unable to recover it. 00:30:28.807 [2024-12-10 05:55:46.652676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.807 [2024-12-10 05:55:46.652708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.652925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.652959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.653184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.653228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.653414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.653446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.653690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.653723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.653947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.653980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.654187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.654229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.654509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.654542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.654743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.654777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.655056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.655091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.655367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.655403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.655606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.655639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.655845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.655878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.656128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.656161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.656473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.656508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.656766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.656799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.657098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.657132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.657402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.657438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.657643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.657675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.657856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.657890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.658075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.658109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.658363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.658399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.658587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.658621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.658814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.658847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.659045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.659079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.659304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.659340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.659526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.659559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.659846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.659880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.660086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.660119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.660351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.660387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.660590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.660623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 00:30:28.808 [2024-12-10 05:55:46.660875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.808 [2024-12-10 05:55:46.660908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.808 qpair failed and we were unable to recover it. 
00:30:28.808 [2024-12-10 05:55:46.661034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.808 [2024-12-10 05:55:46.661068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.808 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every retry from 05:55:46.661353 through 05:55:46.691726 ...]
00:30:28.812 [2024-12-10 05:55:46.691994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.692027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.692250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.692286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.692480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.692513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.692767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.692800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.693079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.693113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 
00:30:28.812 [2024-12-10 05:55:46.693404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.693440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.693708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.693741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.693962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.693996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.694251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.694287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.694578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.694611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 
00:30:28.812 [2024-12-10 05:55:46.694805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.694839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.694971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.695004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.695295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.695331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.695625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.695658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.695879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.695914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 
00:30:28.812 [2024-12-10 05:55:46.696111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.696143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.696395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.696430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.696553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.696586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.696875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.696909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.697179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.697213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 
00:30:28.812 [2024-12-10 05:55:46.697417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.697450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.697706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.697740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.698021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.698055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.698340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.698375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.698651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.698684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 
00:30:28.812 [2024-12-10 05:55:46.698979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.699013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.699284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.699320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.699596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.812 [2024-12-10 05:55:46.699629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.812 qpair failed and we were unable to recover it. 00:30:28.812 [2024-12-10 05:55:46.699845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.699878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.700058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.700091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.700311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.700346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.700533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.700567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.700829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.700862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.701043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.701077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.701357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.701392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.701660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.701698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.701985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.702019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.702234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.702270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.702553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.702586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.702871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.702905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.703102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.703135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.703410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.703446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.703729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.703762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.703956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.703990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.704261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.704296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.704412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.704442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.704712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.704746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.704974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.705006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.705284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.705320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.705608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.705661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.705916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.705949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.706254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.706289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.706549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.706584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.706896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.706929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.707209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.707257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.707485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.707519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.707724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.707757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.707952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.707985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.708261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.708297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.708477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.708510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.708654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.708688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.708911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.708944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.709158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.709191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.709484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.709520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.709785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.709817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 
00:30:28.813 [2024-12-10 05:55:46.710018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.710051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.813 qpair failed and we were unable to recover it. 00:30:28.813 [2024-12-10 05:55:46.710260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.813 [2024-12-10 05:55:46.710295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.710501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.710534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.710742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.710775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.711026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.711060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 
00:30:28.814 [2024-12-10 05:55:46.711366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.711401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.711609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.711642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.711913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.711947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.712236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.712271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 00:30:28.814 [2024-12-10 05:55:46.712421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.814 [2024-12-10 05:55:46.712455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:28.814 qpair failed and we were unable to recover it. 
00:30:28.814 [2024-12-10 05:55:46.712564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.814 [2024-12-10 05:55:46.712602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:28.814 qpair failed and we were unable to recover it.
00:30:29.097 [... the same connect()-failed / sock-connection-error / qpair-failed triple repeats continuously for tqpair=0x7f145c000b90 (10.0.0.2:4420) from 05:55:46.712 through 05:55:46.737 ...]
00:30:29.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 307075 Killed "${NVMF_APP[@]}" "$@"
00:30:29.097 [... connect()-failed / qpair-failed triple repeats ...]
00:30:29.097 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:29.097 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:29.097 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:29.097 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:29.097 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.097 [... connect()-failed / qpair-failed triple repeats, interleaved with the trace lines above ...]
00:30:29.097 [2024-12-10 05:55:46.740505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.740540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.740739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.740774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.741080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.741115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.741328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.741363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.741553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.741588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 
00:30:29.097 [2024-12-10 05:55:46.741782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.741816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.742089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.742124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.742289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.742327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.742476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.742510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 00:30:29.097 [2024-12-10 05:55:46.742640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.097 [2024-12-10 05:55:46.742674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.097 qpair failed and we were unable to recover it. 
00:30:29.097 [2024-12-10 05:55:46.742828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.742861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.743072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.743112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.743353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.743389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.743607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.743642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.743780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.743813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.744070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.744105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.744298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.744334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.744468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.744499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.097 qpair failed and we were unable to recover it.
00:30:29.097 [2024-12-10 05:55:46.744697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.097 [2024-12-10 05:55:46.744730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.744946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.744979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.745288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.745323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.745565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.745599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.745754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.745791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.745993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.746027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.746237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.746272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.746487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.746523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.746678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.746713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.746932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.746968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.747159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.747194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.747365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.747402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.747533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.747568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.747727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.747765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=307887
00:30:29.098 [2024-12-10 05:55:46.747958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.747994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 307887
00:30:29.098 [2024-12-10 05:55:46.748253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.748290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:29.098 [2024-12-10 05:55:46.748488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.748523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 307887 ']'
00:30:29.098 [2024-12-10 05:55:46.748676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.748710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:29.098 [2024-12-10 05:55:46.748987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:29.098 [2024-12-10 05:55:46.749295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.749335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.749498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.749533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:29.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:29.098 [2024-12-10 05:55:46.749681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.749721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:29.098 [2024-12-10 05:55:46.749910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.749948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.750170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.750205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 05:55:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.098 [2024-12-10 05:55:46.750419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.750457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.750663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.750699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.751046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.751083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.751291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.751327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.751482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.751520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.751685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.751720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.751882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.751917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.752173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.752211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.752413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.752450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.752602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.752637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.752903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.098 [2024-12-10 05:55:46.752942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.098 qpair failed and we were unable to recover it.
00:30:29.098 [2024-12-10 05:55:46.753132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.753168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.753339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.753375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.753584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.753620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.753770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.753806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.754039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.754073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.754290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.754326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.754525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.754560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.754766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.754802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.755013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.755046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.755255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.755290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.755450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.755485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.755609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.755643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.755788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.755825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.756051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.756088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.756236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.756272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.756478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.756512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.756657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.756695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.757063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.757100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.757314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.757349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.757563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.757598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.757758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.757797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.757993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.758031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.758160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.758194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.758416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.758452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.758711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.758745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.758989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.759024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.759243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.759279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.759482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.759518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.759747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.759782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.759922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.759958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.760280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.760317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.760470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.760504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.760652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.760686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.760842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.760876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.761024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.761058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.761188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.761235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.761493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.761528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.761655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.761690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.099 [2024-12-10 05:55:46.761830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.099 [2024-12-10 05:55:46.761863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.099 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.762064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.762100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.762292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.762326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.762515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.762551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.762674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.762709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.762904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.762939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.763148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.763182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.763400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.100 [2024-12-10 05:55:46.763435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-12-10 05:55:46.763655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.763690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.763835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.763872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.764089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.764123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.764384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.764421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.764621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.764656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 
00:30:29.100 [2024-12-10 05:55:46.764917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.764953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.765115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.765151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.765377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.765415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.765609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.765643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.765792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.765828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 
00:30:29.100 [2024-12-10 05:55:46.765952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.765988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.766253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.766294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.766483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.766518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.766657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.766691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.766975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.767017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 
00:30:29.100 [2024-12-10 05:55:46.767238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.767274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.767485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.767520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.767725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.767759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.767902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.767937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.768235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.768274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 
00:30:29.100 [2024-12-10 05:55:46.768476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.768512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.768656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.768691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.768991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.769027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.769289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.769333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.769485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.769520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 
00:30:29.100 [2024-12-10 05:55:46.769825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.769865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.100 qpair failed and we were unable to recover it. 00:30:29.100 [2024-12-10 05:55:46.770142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.100 [2024-12-10 05:55:46.770181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.770424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.770460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.770664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.770702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.770974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.771015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.771175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.771210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.771414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.771450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.771649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.771686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.771848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.771884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.772065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.772099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.772256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.772293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.772496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.772533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.772766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.772801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.773018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.773053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.773245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.773281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.773496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.773531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.773751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.773828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.774153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.774196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.774410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.774448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.774655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.774690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.774817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.774853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.775051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.775085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.775210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.775261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.775523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.775556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.775803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.775838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.776043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.776080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.776294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.776329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.776535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.776568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.776723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.776757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.776996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.777041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.777181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.777216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.777448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.777485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.777627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.777664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.777802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.777837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.778040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.778074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.778274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.778311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.778468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.778503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.778710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.778744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.779028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.779061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 00:30:29.101 [2024-12-10 05:55:46.779242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.779278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.101 qpair failed and we were unable to recover it. 
00:30:29.101 [2024-12-10 05:55:46.779422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.101 [2024-12-10 05:55:46.779455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.779712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.779747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.780017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.780051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.780276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.780313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.780461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.780497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 
00:30:29.102 [2024-12-10 05:55:46.780714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.780748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.780964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.780998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.781203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.781256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.781389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.781425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.781553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.781587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 
00:30:29.102 [2024-12-10 05:55:46.781861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.781895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.782104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.782143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.782269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.782308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.782517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.782553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.782692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.782728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 
00:30:29.102 [2024-12-10 05:55:46.782867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.782899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.783107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.783150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.783360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.783396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.783551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.783585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.783728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.783764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 
00:30:29.102 [2024-12-10 05:55:46.784006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.784042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.784158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.784191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.784414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.784449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.784672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.784707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.784905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.784941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 
00:30:29.102 [2024-12-10 05:55:46.785171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.785206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.785355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.785390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.785532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.785566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.785705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.785741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 00:30:29.102 [2024-12-10 05:55:46.785978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.102 [2024-12-10 05:55:46.786022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.102 qpair failed and we were unable to recover it. 
00:30:29.102 [2024-12-10 05:55:46.786141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.786174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.786317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.786352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.786490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.786525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.786815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.786850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.787003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.787038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.787201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.787248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.787453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.787487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.787641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.787676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.787863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.787900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-12-10 05:55:46.788044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.102 [2024-12-10 05:55:46.788084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.788287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.788322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.788514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.788549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.788700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.788734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.788874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.788909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.789043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.789076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.789208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.789253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.789393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.789429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.789544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.789579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.789802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.789836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.789970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.790004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.790141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.790179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.790380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.790417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.790582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.790617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.790774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.790808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.790933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.790970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.791097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.791130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.791258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.791303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.791570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.791649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.791875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.791916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.792136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.792173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.792317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.792353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.792605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.792639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.792899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.792933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.793074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.793108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.793234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.793287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.793422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.793455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.793583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.793618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.793753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.793787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.793930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.793964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.794149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.794193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.794413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.794448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.794563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.794596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.794731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.794764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.794897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.794932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.795161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.795193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.795425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.795461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.795596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.795629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.795780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.795813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-12-10 05:55:46.796014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.103 [2024-12-10 05:55:46.796048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.796243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.796280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.796468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.796504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.796632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.796664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.796804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.796837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.796988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.797024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.797158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.797193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.797326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.797361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.797477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.797512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.797639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.797642] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:30:29.104 [2024-12-10 05:55:46.797673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 [2024-12-10 05:55:46.797691] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.797894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.797929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.798069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.798099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.798238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.798272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.798494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.798529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.798761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.798794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.798928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.798962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.799106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.799141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.799280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.799316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.799472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.799504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.799649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.799684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.799808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.799844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.799974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.800007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.800138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.800171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.800320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.800356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.800558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.800594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.800730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.800763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.800870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.800915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.801139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.801174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.801396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.801432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.801572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.801605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.801742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.801783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.801904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.801938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.802227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.802262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.802387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.104 [2024-12-10 05:55:46.802422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.104 qpair failed and we were unable to recover it.
00:30:29.104 [2024-12-10 05:55:46.802630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.802665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.802850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.802883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.803091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.803126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.803266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.803300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.803419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.803453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.803580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.803612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.803801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.803835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.804034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.804072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.804202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.804254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.804367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.804398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.804584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.804616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.804807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.804843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.105 [2024-12-10 05:55:46.804972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.805005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.805193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.805238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.805356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.805390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.805523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.805557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.805679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.805711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 
00:30:29.105 [2024-12-10 05:55:46.805903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.805936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.806149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.806184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.806386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.806419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.806548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.806580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 00:30:29.105 [2024-12-10 05:55:46.806766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.105 [2024-12-10 05:55:46.806798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.105 qpair failed and we were unable to recover it. 
00:30:29.105 [2024-12-10 05:55:46.807333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.105 [2024-12-10 05:55:46.807374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.105 qpair failed and we were unable to recover it.
00:30:29.108 [2024-12-10 05:55:46.824591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.824623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.824754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.824789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.824922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.824955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.825083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.825115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.825250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.825288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 
00:30:29.108 [2024-12-10 05:55:46.825425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.825457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.825577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.825610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.825811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.825847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.825972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.826007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.826118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.826151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 
00:30:29.108 [2024-12-10 05:55:46.826343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.826383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.826514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.826548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.826690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.826723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.826903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.826936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.827050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.827086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 
00:30:29.108 [2024-12-10 05:55:46.827273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.827309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.827432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.827466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.827656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.827690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.108 [2024-12-10 05:55:46.827799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.108 [2024-12-10 05:55:46.827835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.108 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.827949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.827982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.829572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.829635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.829811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.829847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.829993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.830030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.830236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.830272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.830529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.830564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.830746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.830782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.830978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.831012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.831239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.831276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.831397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.831433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.831558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.831590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.831773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.831808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.831938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.831971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.832092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.832125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.832301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.832337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.832465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.832497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.832623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.832657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.832802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.832838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.832960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.832995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.833118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.833151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.833346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.833388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.833498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.833529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.833703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.833736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.833861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.833896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.834079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.834112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.834237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.834271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.834387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.834421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.834551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.834585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.834718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.834752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.834949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.834984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.835172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.835206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 
00:30:29.109 [2024-12-10 05:55:46.835508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.109 [2024-12-10 05:55:46.835543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.109 qpair failed and we were unable to recover it. 00:30:29.109 [2024-12-10 05:55:46.835731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.835764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.835977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.836010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.836159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.836193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.836337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.836370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 
00:30:29.110 [2024-12-10 05:55:46.836556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.836592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.836717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.836749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.836944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.836979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.837108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.837144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.837278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.837314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 
00:30:29.110 [2024-12-10 05:55:46.837510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.837545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.837740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.837773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.837908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.837943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.838146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.838179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 00:30:29.110 [2024-12-10 05:55:46.838295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.110 [2024-12-10 05:55:46.838330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.110 qpair failed and we were unable to recover it. 
00:30:29.110 [2024-12-10 05:55:46.838657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:29.110 [2024-12-10 05:55:46.838733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 
00:30:29.110 qpair failed and we were unable to recover it. 
00:30:29.110 [2024-12-10 05:55:46.838991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:29.110 [2024-12-10 05:55:46.839065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 
00:30:29.110 qpair failed and we were unable to recover it. 
00:30:29.110 [2024-12-10 05:55:46.839254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:29.110 [2024-12-10 05:55:46.839328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 
00:30:29.110 qpair failed and we were unable to recover it. 
00:30:29.111 [2024-12-10 05:55:46.844479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.844512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.844637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.844670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.844879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.844911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.845035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.845069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.845270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.845304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 
00:30:29.111 [2024-12-10 05:55:46.845418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.845448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.845575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.845606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.845786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.845822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.845927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.845958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.846079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.846112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 
00:30:29.111 [2024-12-10 05:55:46.846306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.846339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.846516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.846562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.846756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.846790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.846937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.846972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.847085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.847117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 
00:30:29.111 [2024-12-10 05:55:46.847264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.847301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.847416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.847448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.847622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.847654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.847787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.847823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.848029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.848065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 
00:30:29.111 [2024-12-10 05:55:46.848199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.848244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.848356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.848389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.848504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.848536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.848652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.848686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 00:30:29.111 [2024-12-10 05:55:46.848816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.111 [2024-12-10 05:55:46.848858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.111 qpair failed and we were unable to recover it. 
00:30:29.111 [2024-12-10 05:55:46.848971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.849114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.849269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.849486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.849642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 
00:30:29.112 [2024-12-10 05:55:46.849797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.849955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.849987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.850171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.850202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.850403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.850437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.850548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.850581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 
00:30:29.112 [2024-12-10 05:55:46.850775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.850812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.850919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.850953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.851134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.851167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.851310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.851344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.851530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.851568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 
00:30:29.112 [2024-12-10 05:55:46.851624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3d460 (9): Bad file descriptor 00:30:29.112 [2024-12-10 05:55:46.851769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.851807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.852040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.852071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.852320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.852354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.852464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.852499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.852688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.852723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 
00:30:29.112 [2024-12-10 05:55:46.852906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.852940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.853040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.853074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.853200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.853244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.854658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.854715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.854939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.854974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 
00:30:29.112 [2024-12-10 05:55:46.855100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.855133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.855405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.855441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.855654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.855688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.112 [2024-12-10 05:55:46.855871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.112 [2024-12-10 05:55:46.855905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.112 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.856026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.856060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 
00:30:29.113 [2024-12-10 05:55:46.856245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.856279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.856421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.856458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.856589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.856622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.856752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.856786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.856903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.856936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 
00:30:29.113 [2024-12-10 05:55:46.857046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.857078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.857270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.857303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.857433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.857467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.857603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.857636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.857765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.857801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 
00:30:29.113 [2024-12-10 05:55:46.857914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.857951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.858060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.858095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.858211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.858254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.858374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.858407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.858536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.858569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 
00:30:29.113 [2024-12-10 05:55:46.858743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.858778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.858895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.858929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.859111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.859146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.859258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.859292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.859407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.859454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 
00:30:29.113 [2024-12-10 05:55:46.859583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.859627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.859776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.859820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.860040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.860113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.860257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.860295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.860416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.860448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 
00:30:29.113 [2024-12-10 05:55:46.860574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.860606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.113 qpair failed and we were unable to recover it. 00:30:29.113 [2024-12-10 05:55:46.860715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.113 [2024-12-10 05:55:46.860749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.860876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.860908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.861151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.861187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.861331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.861371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 
00:30:29.114 [2024-12-10 05:55:46.861493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.861528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.861706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.861741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.861931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.861965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.862148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.862182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.862322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.862356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 
00:30:29.114 [2024-12-10 05:55:46.862475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.862508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.862636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.862669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.862949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.862983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.863109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.863143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.863255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.863290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 
00:30:29.114 [2024-12-10 05:55:46.863490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.863523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.863643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.863677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.863785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.863821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.863940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.863974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.864096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.864128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 
00:30:29.114 [2024-12-10 05:55:46.864324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.864357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.864467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.864500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.864717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.864752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.864867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.864900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.865007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.865047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 
00:30:29.114 [2024-12-10 05:55:46.865164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.865197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.865403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.865437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.865549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.865584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.865869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.114 [2024-12-10 05:55:46.865902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.114 qpair failed and we were unable to recover it. 00:30:29.114 [2024-12-10 05:55:46.866023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.866057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.866241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.866275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.866498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.866531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.866647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.866680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.866796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.866828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.866941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.866973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.867091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.867123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.867398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.867436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.867635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.867668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.867863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.867897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.868101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.868135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.868250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.868283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.868461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.868494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.868682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.868715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.868844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.868877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.869061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.869095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.869228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.869262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.869431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.869465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.869585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.869618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.869727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.869761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.869870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.869904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.870014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.870047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.870226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.870266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.870384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.870419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.870545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.870579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.870766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.870800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.870915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.870948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.871121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.871156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.871292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.871325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.871449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.871482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.871600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.871635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.871874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.871907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.872086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.872119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.872294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.872331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.872507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.872539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.872646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.872680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 
00:30:29.115 [2024-12-10 05:55:46.872807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.872846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.873049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.115 [2024-12-10 05:55:46.873100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.115 qpair failed and we were unable to recover it. 00:30:29.115 [2024-12-10 05:55:46.873291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.873327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.873513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.873546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.873730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.873763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.873907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.873939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.874056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.874090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.874274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.874311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.874418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.874449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.874552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.874583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.874760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.874793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.874914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.874946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.875070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.875103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.875214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.875265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.875372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.875406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.875531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.875563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.875744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.875778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.875898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.875930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.876105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.876139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.876263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.876298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.876503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.876537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.876735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.876768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.876877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.876909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.877086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.877120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.877302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.877336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.877458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.877489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.877670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.877703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.877814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.877847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.877970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.878004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.878201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.878272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.878478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.878512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.878705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.878739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.878933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.878966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.879104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.879139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 00:30:29.116 [2024-12-10 05:55:46.879271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.116 [2024-12-10 05:55:46.879307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.116 qpair failed and we were unable to recover it. 
00:30:29.116 [2024-12-10 05:55:46.879422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.116 [2024-12-10 05:55:46.879455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.116 qpair failed and we were unable to recover it.
00:30:29.116 [2024-12-10 05:55:46.879626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.116 [2024-12-10 05:55:46.879658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.116 qpair failed and we were unable to recover it.
00:30:29.116 [2024-12-10 05:55:46.879787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.116 [2024-12-10 05:55:46.879821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.116 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.880055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.880205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.880402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.880557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.880706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.880857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.880970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.881003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.881119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.881151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.881284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.881320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.881493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.881525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.881673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.881706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.881877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.881911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.882052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.882084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.882368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.882401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.882514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.882545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.882733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.882772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.883015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.883048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.883241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.883275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.883383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.883417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.883606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.883640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.883834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.883868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.883978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.884011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.884215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.884264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.884474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.884507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.884682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.884715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.884898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.884932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.885045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.885077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.885209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.885256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.885397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.885430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.885704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.885737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.117 [2024-12-10 05:55:46.885972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.117 [2024-12-10 05:55:46.886005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.117 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.886114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.886147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.886363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.886403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.886522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.886554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.886754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.886786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.886892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.886924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.887133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.887167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.887424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.887457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.887591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.887625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.887749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.887782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.887974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.888007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.888138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.888172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.888375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.888410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.888535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.888569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.888685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.888717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.888968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.889002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 [2024-12-10 05:55:46.889006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.889140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.889173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.889308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.889342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.889605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.889639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.889780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.889812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.889941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.889975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.890091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.890122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.890298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.890331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.890525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.890560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.890767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.890800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.891036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.891076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.891371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.891408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.891524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.891557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.891734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.118 [2024-12-10 05:55:46.891767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.118 qpair failed and we were unable to recover it.
00:30:29.118 [2024-12-10 05:55:46.891890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.891922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.892116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.892149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.892338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.892372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.892497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.892530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.892703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.892738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.892943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.892975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.893089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.893121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.893317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.893353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.893477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.893509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.893701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.893734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.893846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.893881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.894063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.894096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.894206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.894251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.894425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.894459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.894651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.894685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.894885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.894918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.895025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.895058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.895306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.895339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.895583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.895616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.895797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.895828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.896014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.896047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.896166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.896200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.896466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.896499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.896615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.896655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.896943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.896976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.897112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.897145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.897331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.897368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.897564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.897598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.897769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.897803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.897916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.897950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.119 [2024-12-10 05:55:46.898123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.119 [2024-12-10 05:55:46.898158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.119 qpair failed and we were unable to recover it.
00:30:29.120 [2024-12-10 05:55:46.898342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.120 [2024-12-10 05:55:46.898377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.120 qpair failed and we were unable to recover it.
00:30:29.120 [2024-12-10 05:55:46.898516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.120 [2024-12-10 05:55:46.898552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.120 qpair failed and we were unable to recover it.
00:30:29.120 [2024-12-10 05:55:46.898726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.898760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.898893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.898926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.899046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.899080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.899214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.899259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.899404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.899438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 
00:30:29.120 [2024-12-10 05:55:46.899622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.899657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.899846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.899880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.900097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.900131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.900344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.900381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.900509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.900542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 
00:30:29.120 [2024-12-10 05:55:46.900783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.900818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.900998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.901034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.901151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.901184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.901404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.901449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.901757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.901790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 
00:30:29.120 [2024-12-10 05:55:46.901893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.901927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.902060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.902092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.902265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.902312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.902515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.902548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.902672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.902704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 
00:30:29.120 [2024-12-10 05:55:46.902873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.902906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.903035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.903068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.120 [2024-12-10 05:55:46.903196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.120 [2024-12-10 05:55:46.903240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.120 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.903365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.903399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.903505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.903538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 
00:30:29.121 [2024-12-10 05:55:46.903656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.903688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.903815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.903847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.903951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.903984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.904099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.904131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.904316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.904351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 
00:30:29.121 [2024-12-10 05:55:46.904540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.904573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.904850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.904885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.905121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.905155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.905366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.905399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.905515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.905548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 
00:30:29.121 [2024-12-10 05:55:46.905682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.905715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.905833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.905864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.906080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.906114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.906321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.906355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.906535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.906569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 
00:30:29.121 [2024-12-10 05:55:46.906756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.906789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.906911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.906945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.907123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.907156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.907345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.907378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.907649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.907681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 
00:30:29.121 [2024-12-10 05:55:46.907921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.907954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.908150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.908185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.908309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.908343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.908528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.908561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.908697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.908731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 
00:30:29.121 [2024-12-10 05:55:46.908907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.908940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.909064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.909097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.121 qpair failed and we were unable to recover it. 00:30:29.121 [2024-12-10 05:55:46.909384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.121 [2024-12-10 05:55:46.909419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.909663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.909696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.909824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.909856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 
00:30:29.122 [2024-12-10 05:55:46.909976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.910010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.910154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.910187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.910326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.910365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.910577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.910611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.910740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.910773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 
00:30:29.122 [2024-12-10 05:55:46.910886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.910918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.911043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.911076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.911264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.911299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.911492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.911525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.911659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.911693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 
00:30:29.122 [2024-12-10 05:55:46.911927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.911961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.912135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.912169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.912307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.912341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.912553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.912586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.912714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.912748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 
00:30:29.122 [2024-12-10 05:55:46.912916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.912949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.913061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.913093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.913199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.913241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.913371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.913404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.913522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.913554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 
00:30:29.122 [2024-12-10 05:55:46.913664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.913698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.913972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.914005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.914189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.914230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.914364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.914398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.914572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.914605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 
00:30:29.122 [2024-12-10 05:55:46.914780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.914813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.122 [2024-12-10 05:55:46.915056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.122 [2024-12-10 05:55:46.915091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.122 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.915275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.915309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.915575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.915609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.915749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.915799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.915932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.915965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.916086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.916120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.916390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.916428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.916554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.916587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.916710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.916742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.916871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.916902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.917102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.917136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.917334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.917367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.917564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.917596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.917711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.917744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.917848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.917882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.918093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.918127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.918240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.918282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.918395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.918428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.918602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.918638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.918818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.918852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.918964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.918997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.919238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.919273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.919446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.919479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.919644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.919678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.919865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.919898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.920033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.920064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.920261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.920296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.920414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.920448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.920636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.920668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.920774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.920806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.920940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.920974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.921076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.921109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.921287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.921321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.921496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.921529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 
00:30:29.123 [2024-12-10 05:55:46.921639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.921672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.921798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.921830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.922017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.123 [2024-12-10 05:55:46.922050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.123 qpair failed and we were unable to recover it. 00:30:29.123 [2024-12-10 05:55:46.922160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.922194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.922311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.922345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.922450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.922482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.922613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.922645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.922835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.922867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.923039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.923070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.923196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.923249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.923435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.923470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.923644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.923678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.923892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.923923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.924107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.924140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.924313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.924346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.924547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.924580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.924695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.924728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.924832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.924866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.925006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.925038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.925155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.925186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.925376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.925414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.925595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.925631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.925759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.925792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.926012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.926045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.926163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.926197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.926312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.926346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.926530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.926564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.926757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.926791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.926993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.927025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.927210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.927261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.927369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.927403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.927536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.927568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.927690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.927722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.927894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.927926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.928046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.928080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 
00:30:29.124 [2024-12-10 05:55:46.928290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.928325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.928449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.928480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.928724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.928759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.124 [2024-12-10 05:55:46.928879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.124 [2024-12-10 05:55:46.928912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.124 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.929026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.929058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 
00:30:29.125 [2024-12-10 05:55:46.929247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.929280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.929479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.929512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.929684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.929718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.929970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.930003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.930128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.930162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 
00:30:29.125 [2024-12-10 05:55:46.930293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.930327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.930520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.930553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.930798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.930830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.930957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.930991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.931173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.931211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 
00:30:29.125 [2024-12-10 05:55:46.931412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.931445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.931733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.931767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.931981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.932016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.932148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.932180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.932300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.932333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 
00:30:29.125 [2024-12-10 05:55:46.932487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.932523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.932784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.932817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.932936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.932970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.933148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.933180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.933390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.933424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 
00:30:29.125 [2024-12-10 05:55:46.933602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.933638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.933774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.933807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.933928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.933960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.934115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.125 [2024-12-10 05:55:46.934142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.125 [2024-12-10 05:55:46.934151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.125 [2024-12-10 05:55:46.934158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.125 [2024-12-10 05:55:46.934164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:29.125 [2024-12-10 05:55:46.934161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.934194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.934458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.934491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.934607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.934639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.934860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.934894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.935065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.935098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 
00:30:29.125 [2024-12-10 05:55:46.935252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.935286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.935454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.125 [2024-12-10 05:55:46.935487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.125 qpair failed and we were unable to recover it. 00:30:29.125 [2024-12-10 05:55:46.935618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.126 [2024-12-10 05:55:46.935650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.126 qpair failed and we were unable to recover it. 00:30:29.126 [2024-12-10 05:55:46.935771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.126 [2024-12-10 05:55:46.935804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.126 qpair failed and we were unable to recover it. 00:30:29.126 [2024-12-10 05:55:46.935848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:29.126 [2024-12-10 05:55:46.935972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.126 [2024-12-10 05:55:46.936003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.126 qpair failed and we were unable to recover it. 
00:30:29.126 [2024-12-10 05:55:46.935976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:29.126 [2024-12-10 05:55:46.936104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:29.126 [2024-12-10 05:55:46.936105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:29.126 [2024-12-10 05:55:46.936252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.936289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.936543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.936579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.936888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.936920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.937059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.937091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.937273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.937306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.937501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.937533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.937741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.937774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.937892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.937927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.938040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.938074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.938281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.938315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.938441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.938476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.938660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.938694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.938876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.938910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.939111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.939151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.939287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.939326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.939457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.939489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.939622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.939654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.939920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.939953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.940123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.940156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.940384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.940418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.940530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.940563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.940771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.940803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.940991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.941025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.941152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.941185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.941335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.941386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.941579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.941617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.941722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.941759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.941872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.941906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.942092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.942125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.942317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.942351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.942602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.942635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.942765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.942798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.942983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.943016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.943152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.943185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.126 [2024-12-10 05:55:46.943334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.126 [2024-12-10 05:55:46.943372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.126 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.943570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.943604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.943735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.943769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.943952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.943986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.944101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.944135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.944256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.944290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.944420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.944454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.944629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.944663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.944842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.944876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.944990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.945024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.945135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.945168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.945294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.945329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.945504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.945537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.945655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.945688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.945812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.945845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.946028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.946062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.946260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.946295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.946472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.946504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.946624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.946658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.946787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.946826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.947017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.947051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.947294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.947328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.947439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.947477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.947724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.947759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.948006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.948041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.948253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.948288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.948400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.948432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.948557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.948591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.948773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.948811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.948998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.949032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.949202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.949242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.949432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.949466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.949582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.949617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.949753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.949786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.949967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.950172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.950208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.950324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.950360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.950546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.950581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.950699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.950732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.950905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.127 [2024-12-10 05:55:46.950940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.127 qpair failed and we were unable to recover it.
00:30:29.127 [2024-12-10 05:55:46.951083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.951117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.951251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.951287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.951468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.951505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.951633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.951669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.951794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.951828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.952031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.952067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.952257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.952300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.952418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.952453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.952639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.952673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.952860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.952895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.953027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.953061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.953182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.953224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.953336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.953369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.953550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.953586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.953688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.953723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.953840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.953873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.954067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.954103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.954245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.954280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.954493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.954529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.954654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.128 [2024-12-10 05:55:46.954686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.128 qpair failed and we were unable to recover it.
00:30:29.128 [2024-12-10 05:55:46.954965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.955002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.955137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.955171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.955308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.955346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.955542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.955577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.955697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.955731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 
00:30:29.128 [2024-12-10 05:55:46.955861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.955895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.956101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.956138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.956261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.956296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.956476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.956511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.956633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.956668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 
00:30:29.128 [2024-12-10 05:55:46.956789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.956824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.957006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.957039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.957153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.957188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.957413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.957455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.957583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.957617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 
00:30:29.128 [2024-12-10 05:55:46.957795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.957829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.957939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.957976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.958084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.958119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.958259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.958294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.128 qpair failed and we were unable to recover it. 00:30:29.128 [2024-12-10 05:55:46.958439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.128 [2024-12-10 05:55:46.958474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.958597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.958631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.958810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.958845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.959033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.959069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.959189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.959232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.959346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.959384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.959492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.959528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.959652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.959687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.959825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.959860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.960033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.960068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.960246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.960284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.960403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.960437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.960552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.960587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.960700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.960734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.960845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.960880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.961167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.961204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.961330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.961363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.961469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.961505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.961621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.961655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.961767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.961801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.961975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.962011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.962135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.962184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.962350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.962411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.962679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.962713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.962836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.962869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.963108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.963142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.963264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.963299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.963474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.963507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.963638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.963671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.963855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.963887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.964072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.964104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 
00:30:29.129 [2024-12-10 05:55:46.964250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.964286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.964409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.129 [2024-12-10 05:55:46.964443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.129 qpair failed and we were unable to recover it. 00:30:29.129 [2024-12-10 05:55:46.964558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.964591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.964774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.964807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.964992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.965025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 
00:30:29.130 [2024-12-10 05:55:46.965153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.965188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.965310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.965350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.965465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.965498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.965617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.965651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.965852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.965887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 
00:30:29.130 [2024-12-10 05:55:46.966000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.966034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.966204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.966250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.966382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.966417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.966546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.966581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.966698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.966735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 
00:30:29.130 [2024-12-10 05:55:46.966866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.966904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.967139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.967176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.967305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.967347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.967465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.967500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.967701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.967735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 
00:30:29.130 [2024-12-10 05:55:46.967845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.967882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.968002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.968039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.968233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.968271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.968388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.968423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.968595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.968631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 
00:30:29.130 [2024-12-10 05:55:46.968769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.968805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.968923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.968957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.130 qpair failed and we were unable to recover it. 00:30:29.130 [2024-12-10 05:55:46.969139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.130 [2024-12-10 05:55:46.969174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 00:30:29.131 [2024-12-10 05:55:46.969312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.969347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 00:30:29.131 [2024-12-10 05:55:46.969452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.969488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 
00:30:29.131 [2024-12-10 05:55:46.969609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.969644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 00:30:29.131 [2024-12-10 05:55:46.969864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.969898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 00:30:29.131 [2024-12-10 05:55:46.970077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.970111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 00:30:29.131 [2024-12-10 05:55:46.970258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.970294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 00:30:29.131 [2024-12-10 05:55:46.970437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.131 [2024-12-10 05:55:46.970472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.131 qpair failed and we were unable to recover it. 
00:30:29.131 [2024-12-10 05:55:46.970592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.131 [2024-12-10 05:55:46.970626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.131 qpair failed and we were unable to recover it.
00:30:29.134 [... the same connect()/qpair-failure triplet (errno = 111, tqpair=0x1b2f500, addr=10.0.0.2, port=4420) repeats 114 more times, timestamps 05:55:46.970745 through 05:55:46.992876 ...]
00:30:29.135 [2024-12-10 05:55:46.993062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.993096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.993270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.993304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.993418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.993452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.993657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.993691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.993801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.993837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 
00:30:29.135 [2024-12-10 05:55:46.993946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.993979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.994170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.994205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.994342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.994376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.994496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.994530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.994701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.994735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 
00:30:29.135 [2024-12-10 05:55:46.994913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.994947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.995204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.995248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.995441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.995474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.995664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.995698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.995831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.995863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 
00:30:29.135 [2024-12-10 05:55:46.996104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.996137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.996341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.996377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.996485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.996519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.996637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.996670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.996849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.996883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 
00:30:29.135 [2024-12-10 05:55:46.997053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.997087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.997199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.997241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.997351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.997384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.997580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.997613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.997721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.997754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 
00:30:29.135 [2024-12-10 05:55:46.997861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.997896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.998015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.998047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.998235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.135 [2024-12-10 05:55:46.998271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.135 qpair failed and we were unable to recover it. 00:30:29.135 [2024-12-10 05:55:46.998448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.998482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:46.998666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.998698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 
00:30:29.136 [2024-12-10 05:55:46.998913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.998976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:46.999201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.999260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:46.999386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.999420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:46.999612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.999647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:46.999773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:46.999806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 
00:30:29.136 [2024-12-10 05:55:46.999980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.000014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.000129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.000164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.000296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.000335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.000552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.000585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.000687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.000717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 
00:30:29.136 [2024-12-10 05:55:47.000884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.000916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.001092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.001123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.001262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.001295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.001481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.001522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.001627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.001658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 
00:30:29.136 [2024-12-10 05:55:47.001775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.001807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.001915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.001948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.002127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.002160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.002355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.002393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.002575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.002608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 
00:30:29.136 [2024-12-10 05:55:47.002732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.002768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.002901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.002933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.003105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.136 [2024-12-10 05:55:47.003138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.136 qpair failed and we were unable to recover it. 00:30:29.136 [2024-12-10 05:55:47.003317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.003352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.003544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.003575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 
00:30:29.137 [2024-12-10 05:55:47.003691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.003723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.003918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.003953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.004172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.004207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.004476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.004510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.004699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.004732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 
00:30:29.137 [2024-12-10 05:55:47.004999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.005032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.005149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.005180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.005403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.005437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.005629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.005662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.005866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.005899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 
00:30:29.137 [2024-12-10 05:55:47.006154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.006188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.006424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.006462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.006593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.006626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.006810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.006844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.006960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.006995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 
00:30:29.137 [2024-12-10 05:55:47.007180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.007228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.007470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.007504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.007634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.007668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.007951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.007985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 00:30:29.137 [2024-12-10 05:55:47.008103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.008136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 
00:30:29.137 [2024-12-10 05:55:47.008265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.137 [2024-12-10 05:55:47.008300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.137 qpair failed and we were unable to recover it. 
[... the preceding connect()/qpair-failure pair (errno = 111) repeats for tqpair=0x1b2f500 and tqpair=0x7f1454000b90, addr=10.0.0.2, port=4420, from 05:55:47.008428 through 05:55:47.026637 ...]
00:30:29.407 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 
00:30:29.407 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 
00:30:29.407 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 
00:30:29.407 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 
00:30:29.408 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
[... interleaved connect()/qpair-failure pairs (errno = 111) continue for tqpair=0x1b2f500 and tqpair=0x7f1454000b90, addr=10.0.0.2, port=4420, from 05:55:47.026892 through 05:55:47.031790 ...]
00:30:29.408 [2024-12-10 05:55:47.031975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.032010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.032199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.032321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.032576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.032611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.032807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.032841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.032964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.033000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 
00:30:29.408 [2024-12-10 05:55:47.033170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.033203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.033412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.033448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.033589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.033622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.033810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.033845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.034023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.034056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 
00:30:29.408 [2024-12-10 05:55:47.034244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.034279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.034460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.034493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.034753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.034791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.034912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.034945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.035128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.035161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 
00:30:29.408 [2024-12-10 05:55:47.035307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.035348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.035466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.035500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.035715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.035748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.035951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.035986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.036183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.036227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 
00:30:29.408 [2024-12-10 05:55:47.036416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.408 [2024-12-10 05:55:47.036449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.408 qpair failed and we were unable to recover it. 00:30:29.408 [2024-12-10 05:55:47.036629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.036663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.036801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.036833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.037045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.037080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.037252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.037288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.037423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.037457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.037727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.037762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.037882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.037915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.038156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.038190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.038438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.038503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.038707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.038753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.039028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.039063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.039212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.039262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.039385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.039420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.039673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.039706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.039846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.039878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.040018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.040054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.040167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.040202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.040470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.040505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.040631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.040665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.040768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.040802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.040918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.040952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.041152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.041194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.041320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.041354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.041535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.041570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.041746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.041779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.041893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.041926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.042058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.042092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.042272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.042307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.042494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.042527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.042650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.042684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.042787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.042819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.042939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.042972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.043096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.043130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.043272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.043306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.043418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.043452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.043638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.043672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.043916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.043951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.044061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.044093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 00:30:29.409 [2024-12-10 05:55:47.044306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.044342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.409 qpair failed and we were unable to recover it. 
00:30:29.409 [2024-12-10 05:55:47.044464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.409 [2024-12-10 05:55:47.044499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.044625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.044657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.044778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.044811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.044931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.044966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.045077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.045111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 
00:30:29.410 [2024-12-10 05:55:47.045255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.045289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.045405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.045439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.045552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.045583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.045762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.045796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.045944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.045987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 
00:30:29.410 [2024-12-10 05:55:47.046108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.046143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.046415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.046451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.046579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.046612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.046733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.046766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.046899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.046932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 
00:30:29.410 [2024-12-10 05:55:47.047039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.047072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.047265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.047302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.047436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.047471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.047603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.047636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.047771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.047806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 
00:30:29.410 [2024-12-10 05:55:47.048009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.048044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.048170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.048204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.048419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.048462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.048644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.048677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 00:30:29.410 [2024-12-10 05:55:47.048801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.410 [2024-12-10 05:55:47.048835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.410 qpair failed and we were unable to recover it. 
00:30:29.410 [2024-12-10 05:55:47.048952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.048985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.049103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.049134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.049334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.049369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.049560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.049593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.049709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.049743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.049855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.049889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.050089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.050121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.050241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.050275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.050398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.050432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.050545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.050578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.050691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.050724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.050846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.050880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.051009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.051042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.051158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.051191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.410 qpair failed and we were unable to recover it.
00:30:29.410 [2024-12-10 05:55:47.051402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.410 [2024-12-10 05:55:47.051437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.051643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.051678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.051801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.051834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.051966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.051999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.052168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.052202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.052456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.052490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.052622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.052655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.052836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.052871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.052999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.053033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.053142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.053177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.053335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.053388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.053516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.053551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.053674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.053708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.053830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.053863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.053979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.054013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.054140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.054176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.054304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.054338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.054464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.054498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.054622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.054655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.054836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.054869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.054998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.055034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.055138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.055170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.055353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.055387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.055501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.055549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.055792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.055827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.056951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.056984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.057111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.057144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.057260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.057296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.057404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.057436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.057562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.057597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.057724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.411 [2024-12-10 05:55:47.057759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.411 qpair failed and we were unable to recover it.
00:30:29.411 [2024-12-10 05:55:47.057947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.057980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.058086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.058118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.058246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.058281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.058387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.058420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.058535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.058570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.058697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.058729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.058842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.058876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.059050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.059085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.059196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.059241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.059365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.059398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.059530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.059563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.059672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.059706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.059824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.059858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.060047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.060208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.060382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.060530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.060674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.060882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.060984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.061018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.061135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.061169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.061304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.061340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.061519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.061554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.061665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.061700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.061803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.061838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.062016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.062049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.062162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.062195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.062318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.062352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.062623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.062657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.062843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.062877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.063051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.063086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.063194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.063241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.063480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.063514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.412 [2024-12-10 05:55:47.063639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.412 [2024-12-10 05:55:47.063673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.412 qpair failed and we were unable to recover it.
00:30:29.413 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:29.413 [2024-12-10 05:55:47.063795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.063829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.064094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.064129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:29.413 [2024-12-10 05:55:47.064260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.064296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.064418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.064451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.413 [2024-12-10 05:55:47.064556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.064590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.064719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.064752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.064891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.413 [2024-12-10 05:55:47.064925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.065054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.065088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.065201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.413 [2024-12-10 05:55:47.065243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.413 qpair failed and we were unable to recover it.
00:30:29.413 [2024-12-10 05:55:47.065371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.065406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.065573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.065607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.065728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.065761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.065895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.065929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.066044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.066078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 
00:30:29.413 [2024-12-10 05:55:47.066297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.066331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.066430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.066463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.066576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.066610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.066848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.066882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.067018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.067051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 
00:30:29.413 [2024-12-10 05:55:47.067161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.067195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.067377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.067412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.067518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.067552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.067660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.067810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.067843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 
00:30:29.413 [2024-12-10 05:55:47.067973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.068005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.068173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.068207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.068330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.068364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.068542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.068575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.068700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.068733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 
00:30:29.413 [2024-12-10 05:55:47.068859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.068894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.069015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.069048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.069155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.069194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.069341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.069375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.069500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.069535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 
00:30:29.413 [2024-12-10 05:55:47.069656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.069690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.069812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.069845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.070023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.070057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.413 [2024-12-10 05:55:47.070240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.413 [2024-12-10 05:55:47.070274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.413 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.070384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.070418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.070594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.070628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.070749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.070783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.070894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.070927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.071123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.071156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.071350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.071385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.071574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.071607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.071731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.071765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.071940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.071975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.072090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.072122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.072313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.072348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.072449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.072482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.072588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.072622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.072743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.072776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.072879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.072911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.073084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.073117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.073234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.073268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.073440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.073475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.073661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.073694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.073902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.073935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.074055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.074094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.074248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.074284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.074453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.074487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.074608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.074642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.074764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.074798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.074921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.074956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.075144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.075177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.075366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.075400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.075515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.075548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.075720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.075754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.075924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.075958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.076135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.076169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.076296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.076330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.076435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.076469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.076595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.076635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.076774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.076808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 
00:30:29.414 [2024-12-10 05:55:47.076926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.076959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.077096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.077274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.414 [2024-12-10 05:55:47.077308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.414 qpair failed and we were unable to recover it. 00:30:29.414 [2024-12-10 05:55:47.077422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.077454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.077639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.077673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 
00:30:29.415 [2024-12-10 05:55:47.077855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.077889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.078058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.078092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.078211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.078268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.078386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.078422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.078557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.078590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 
00:30:29.415 [2024-12-10 05:55:47.078767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.078799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.078925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.078964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.079135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.079170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.079376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.079409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.079589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.079621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 
00:30:29.415 [2024-12-10 05:55:47.079810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.079843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.080040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.080072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.080196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.080241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.080378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.080410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 00:30:29.415 [2024-12-10 05:55:47.080529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.415 [2024-12-10 05:55:47.080562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.415 qpair failed and we were unable to recover it. 
00:30:29.415 [2024-12-10 05:55:47.080854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.415 [2024-12-10 05:55:47.080888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.415 qpair failed and we were unable to recover it.
00:30:29.415-00:30:29.418 [... the connect()/qpair-failure trio above repeats through 05:55:47.081-05:55:47.099, with only the timestamps and the tqpair handle changing (0x7f1454000b90, then 0x7f1450000b90, then 0x7f145c000b90), always against addr=10.0.0.2, port=4420 ...]
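The errno = 111 that dominates this stretch of the log is ECONNREFUSED: the initiator's connect() reaches 10.0.0.2, but nothing is accepting on port 4420 (the target-disconnect test tears the listener down on purpose). A minimal, SPDK-independent sketch of how that errno surfaces — the port number here is just a locally reserved one, not the NVMe/TCP port from the log:

```python
import errno
import socket

# Reserve a local port that nothing is listening on: bind a socket to an
# ephemeral port, record the number, then close it so connects are refused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

client = socket.socket()
rc = client.connect_ex(("127.0.0.1", port))  # returns the errno instead of raising
client.close()

print(rc == errno.ECONNREFUSED)  # ECONNREFUSED is errno 111 on Linux
```

`connect_ex` is used instead of `connect` so the kernel's error code can be compared directly against `errno.ECONNREFUSED`, the symbolic name for the 111 seen in the log.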
00:30:29.418 [2024-12-10 05:55:47.099158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.099192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 Malloc0 00:30:29.418 [2024-12-10 05:55:47.099305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.099339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 [2024-12-10 05:55:47.099449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.099482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 [2024-12-10 05:55:47.099627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.099678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 [2024-12-10 05:55:47.099877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.099925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 
00:30:29.418 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.418 [2024-12-10 05:55:47.100099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.100140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 [2024-12-10 05:55:47.100249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.100283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:29.418 [2024-12-10 05:55:47.100544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 [2024-12-10 05:55:47.100579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 00:30:29.418 [2024-12-10 05:55:47.100752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.418 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.418 [2024-12-10 05:55:47.100787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.418 qpair failed and we were unable to recover it. 
00:30:29.418 [2024-12-10 05:55:47.100916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.100948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.419 [2024-12-10 05:55:47.101076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.101111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.101297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.101333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.101470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.101503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.101673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.101707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.101827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.101861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.102109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.102144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.102388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.102422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.102640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.102675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.102869] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:29.419 [2024-12-10 05:55:47.102918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.102950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.103070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.103102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.103273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.103306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.103428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.103464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.103741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.103775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.103965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.103999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.104178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.104211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.104403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.104436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.104674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.104707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.104826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.104858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.105041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.105075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.105252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.105286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.105458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.105491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.105711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.105751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.105884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.105918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.106113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.106148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.106321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.106355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.106526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.106559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.106798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.106832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.106940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.106975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.107170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.107203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.107395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.107429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.107556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.107590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.107706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.107746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.107854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.107888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.108070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.108104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.108287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.108322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.419 [2024-12-10 05:55:47.108438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.108476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.419 [2024-12-10 05:55:47.108606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.419 [2024-12-10 05:55:47.108637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.419 qpair failed and we were unable to recover it.
00:30:29.420 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:29.420 [2024-12-10 05:55:47.108883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.108918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.109126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.420 [2024-12-10 05:55:47.109159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.109355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.109390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.420 [2024-12-10 05:55:47.109523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.109556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.109752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.109785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.109969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.110002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.110175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.110209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.110352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.110385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.110557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.110590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.110773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.110808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.110934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.110967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.111153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.111186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2f500 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.111338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.111376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.111550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.111584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.111691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.111723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.111843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.111876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.112054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.112087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.112232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.112265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.112441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.112472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.112653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.112693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.112817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.112851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.113036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.113068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.113289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.113321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.113491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.113522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.113712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.113745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.113993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.114025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.114207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.114248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.114425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.114457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.114638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.114670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.114840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.114871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.115053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.115085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.115254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.115286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.115550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.115583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.115757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.115791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.115974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.116006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.116139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 [2024-12-10 05:55:47.116172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.116443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.420 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.420 [2024-12-10 05:55:47.116476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.420 qpair failed and we were unable to recover it.
00:30:29.420 [2024-12-10 05:55:47.116660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.116691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:29.421 [2024-12-10 05:55:47.116859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.116892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.421 [2024-12-10 05:55:47.117158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.117190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1454000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.117409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.117449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.421 [2024-12-10 05:55:47.117585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.117618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.117823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.117855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.118106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.118138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f145c000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.118304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.118340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.118470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.118503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.118692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.118724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.118845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.118878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.119147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.119180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.119363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.119396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.119517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.119549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.119792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.119826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.119953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.119986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.120098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.120130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.120394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.421 [2024-12-10 05:55:47.120428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.421 qpair failed and we were unable to recover it.
00:30:29.421 [2024-12-10 05:55:47.120666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.120700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.120878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.120912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.121100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.121134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.121405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.121440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.121546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.121579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 
00:30:29.421 [2024-12-10 05:55:47.121750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.121782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.121887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.121920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.122030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.122062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.122247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.122280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.122390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.122421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 
00:30:29.421 [2024-12-10 05:55:47.122536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.122568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.122736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.122769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.122942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.122975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.123213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.123262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 00:30:29.421 [2024-12-10 05:55:47.123398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.421 [2024-12-10 05:55:47.123431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.421 qpair failed and we were unable to recover it. 
00:30:29.421 [2024-12-10 05:55:47.123634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.123667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.123867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.123900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.124077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.124109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.124309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.124342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.422 [2024-12-10 05:55:47.124623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.124657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 
00:30:29.422 [2024-12-10 05:55:47.124836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.124868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.125087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.125119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.422 [2024-12-10 05:55:47.125315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.125349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.422 [2024-12-10 05:55:47.125523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.125557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 
00:30:29.422 [2024-12-10 05:55:47.125748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.125780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.125900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.125932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.126144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.126176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.126431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.126472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.126686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.126719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 
00:30:29.422 [2024-12-10 05:55:47.126850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.126881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.127062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.127095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.127307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.127341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.127514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.127547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 00:30:29.422 [2024-12-10 05:55:47.127667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.422 [2024-12-10 05:55:47.127699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420 00:30:29.422 qpair failed and we were unable to recover it. 
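For reference, the `errno = 111` repeated in every `posix_sock_create: connect() failed` line above is, on Linux, `ECONNREFUSED`: at this point nothing is yet accepting TCP connections on 10.0.0.2:4420, so each SYN is refused (the target's listener only comes up later, at the `nvmf_tcp_listen` NOTICE). A minimal check, assuming a Linux errno table:

```python
import errno
import os

# On Linux, errno 111 is ECONNREFUSED: the connect() attempts to
# 10.0.0.2:4420 are rejected because no listener is bound yet.
# (The numeric value 111 is Linux-specific; other platforms differ.)
assert errno.ECONNREFUSED == 111

print(os.strerror(errno.ECONNREFUSED))  # Connection refused
```

This is why the retries stop failing with errno 111 once the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice appears and the failure mode shifts to fabric-level CONNECT errors instead.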
00:30:29.422 [2024-12-10 05:55:47.127910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.422 [2024-12-10 05:55:47.127943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1450000b90 with addr=10.0.0.2, port=4420
00:30:29.422 qpair failed and we were unable to recover it.
00:30:29.422 [2024-12-10 05:55:47.127989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.422 [2024-12-10 05:55:47.133545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.422 [2024-12-10 05:55:47.133674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.422 [2024-12-10 05:55:47.133718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.422 [2024-12-10 05:55:47.133740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.422 [2024-12-10 05:55:47.133761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:29.422 [2024-12-10 05:55:47.133813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:29.422 qpair failed and we were unable to recover it.
00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.422 05:55:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 307320
[2024-12-10 05:55:47.143442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-12-10 05:55:47.143528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-12-10 05:55:47.143555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-12-10 05:55:47.143569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-12-10 05:55:47.143582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
[2024-12-10 05:55:47.143613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:29.422 qpair failed and we were unable to recover it.
00:30:29.422 [2024-12-10 05:55:47.153449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.422 [2024-12-10 05:55:47.153515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.422 [2024-12-10 05:55:47.153533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.422 [2024-12-10 05:55:47.153544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.422 [2024-12-10 05:55:47.153553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.422 [2024-12-10 05:55:47.153573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.422 qpair failed and we were unable to recover it. 
00:30:29.422 [2024-12-10 05:55:47.163452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.422 [2024-12-10 05:55:47.163522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.422 [2024-12-10 05:55:47.163563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.422 [2024-12-10 05:55:47.163570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.422 [2024-12-10 05:55:47.163577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.422 [2024-12-10 05:55:47.163606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.422 qpair failed and we were unable to recover it. 
00:30:29.422 [2024-12-10 05:55:47.173481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.422 [2024-12-10 05:55:47.173540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.422 [2024-12-10 05:55:47.173554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.422 [2024-12-10 05:55:47.173561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.422 [2024-12-10 05:55:47.173567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.422 [2024-12-10 05:55:47.173583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.422 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.183453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.183518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.183532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.183540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.183546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.183561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.193476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.193536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.193549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.193556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.193563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.193578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.203503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.203561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.203575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.203583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.203589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.203604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.213556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.213613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.213626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.213633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.213639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.213654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.223616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.223715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.223728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.223740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.223746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.223761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.233521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.233586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.233598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.233606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.233612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.233627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.243622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.243680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.243694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.243701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.243707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.243721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.253576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.253632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.253645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.253653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.253659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.253674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.263664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.263720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.263735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.263742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.263748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.263766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.273690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.273745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.273759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.273766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.273773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.273788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.283729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.283785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.283798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.283805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.283811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.283827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.293764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.293822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.293836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.293842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.293849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.293864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.303796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.303852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.423 [2024-12-10 05:55:47.303865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.423 [2024-12-10 05:55:47.303871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.423 [2024-12-10 05:55:47.303878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.423 [2024-12-10 05:55:47.303893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.423 qpair failed and we were unable to recover it. 
00:30:29.423 [2024-12-10 05:55:47.313809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.423 [2024-12-10 05:55:47.313865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.424 [2024-12-10 05:55:47.313878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.424 [2024-12-10 05:55:47.313885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.424 [2024-12-10 05:55:47.313892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.424 [2024-12-10 05:55:47.313905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.424 qpair failed and we were unable to recover it. 
00:30:29.424 [2024-12-10 05:55:47.323850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.424 [2024-12-10 05:55:47.323904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.424 [2024-12-10 05:55:47.323917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.424 [2024-12-10 05:55:47.323924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.424 [2024-12-10 05:55:47.323930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.424 [2024-12-10 05:55:47.323945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.424 qpair failed and we were unable to recover it. 
00:30:29.424 [2024-12-10 05:55:47.333909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.424 [2024-12-10 05:55:47.333962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.424 [2024-12-10 05:55:47.333975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.424 [2024-12-10 05:55:47.333982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.424 [2024-12-10 05:55:47.333989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.424 [2024-12-10 05:55:47.334005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.424 qpair failed and we were unable to recover it. 
00:30:29.424 [2024-12-10 05:55:47.343898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.424 [2024-12-10 05:55:47.343954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.424 [2024-12-10 05:55:47.343968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.424 [2024-12-10 05:55:47.343975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.424 [2024-12-10 05:55:47.343982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.424 [2024-12-10 05:55:47.343997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.424 qpair failed and we were unable to recover it. 
00:30:29.683 [2024-12-10 05:55:47.353863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.683 [2024-12-10 05:55:47.353924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.683 [2024-12-10 05:55:47.353940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.683 [2024-12-10 05:55:47.353947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.683 [2024-12-10 05:55:47.353953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.683 [2024-12-10 05:55:47.353968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.683 qpair failed and we were unable to recover it. 
00:30:29.683 [2024-12-10 05:55:47.363968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.683 [2024-12-10 05:55:47.364029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.683 [2024-12-10 05:55:47.364042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.683 [2024-12-10 05:55:47.364049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.683 [2024-12-10 05:55:47.364055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.683 [2024-12-10 05:55:47.364070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.683 qpair failed and we were unable to recover it. 
00:30:29.683 [2024-12-10 05:55:47.373987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.683 [2024-12-10 05:55:47.374042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.683 [2024-12-10 05:55:47.374054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.374061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.374068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.374083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.384029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.384087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.384100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.384107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.384114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.384129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.394048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.394102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.394116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.394123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.394132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.394147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.404094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.404160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.404173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.404181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.404187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.404202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.414123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.414188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.414201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.414209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.414216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.414234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.424134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.424184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.424197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.424204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.424210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.424229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.434146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.434198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.434212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.434222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.434229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.434244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.444195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.444261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.444274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.444281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.444287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.444303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.454209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.454272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.454285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.454292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.454298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.454313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.464281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.464351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.464364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.464372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.464378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.464393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.474177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.474251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.474265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.474271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.474278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.474294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.484235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.484288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.484305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.484311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.484318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.484333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.494322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.494376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.494389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.494396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.494402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.494416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.684 qpair failed and we were unable to recover it. 
00:30:29.684 [2024-12-10 05:55:47.504421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.684 [2024-12-10 05:55:47.504481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.684 [2024-12-10 05:55:47.504494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.684 [2024-12-10 05:55:47.504501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.684 [2024-12-10 05:55:47.504507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.684 [2024-12-10 05:55:47.504522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.514366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.514460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.514472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.514479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.514485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.514499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.524417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.524470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.524483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.524490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.524499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.524514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.534440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.534497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.534510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.534517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.534524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.534538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.544467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.544522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.544536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.544542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.544549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.544563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.554503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.554556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.554569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.554576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.554583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.554597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.564543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.564600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.564613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.564620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.564626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.564641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.574577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.574631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.574645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.574651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.574658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.574673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.584597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.584658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.584671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.584678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.584684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.584700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.594619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.594674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.594687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.594694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.594700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.594714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.604657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.604714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.604727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.604734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.604741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.604754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.614629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.614687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.614704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.614711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.614716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.614731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.624719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.624774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.624787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.624793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.624799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.624814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.685 [2024-12-10 05:55:47.634765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.685 [2024-12-10 05:55:47.634825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.685 [2024-12-10 05:55:47.634838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.685 [2024-12-10 05:55:47.634844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.685 [2024-12-10 05:55:47.634851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.685 [2024-12-10 05:55:47.634865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.685 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.644723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.644783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.644797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.644804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.644810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.644824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.654825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.654880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.654893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.654903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.654910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.654924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.664816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.664879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.664892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.664900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.664906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.664920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.674873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.674937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.674950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.674956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.674962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.674976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.684881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.684936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.684949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.684956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.684962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.684977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.694906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.694962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.694974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.694981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.694987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.695004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.704973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.705032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.705045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.705052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.705058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.705073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.714957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.715011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.715024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.715031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.715037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.715052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.725000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.725053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.725067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.725073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.725080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.725094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.735015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.735078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.735091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.735098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.735104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.735119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.745056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.945 [2024-12-10 05:55:47.745111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.945 [2024-12-10 05:55:47.745125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.945 [2024-12-10 05:55:47.745131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.945 [2024-12-10 05:55:47.745138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.945 [2024-12-10 05:55:47.745153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.945 qpair failed and we were unable to recover it. 
00:30:29.945 [2024-12-10 05:55:47.755081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.755137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.755151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.755157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.755164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.755178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.765049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.765104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.765120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.765127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.765133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.765148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.775130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.775184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.775198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.775205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.775211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.775230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.785188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.785245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.785258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.785268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.785274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.785289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.795166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.795228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.795241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.795248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.795254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.795269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.805216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.805277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.805290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.805297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.805304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.805319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.815214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.815269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.815281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.815289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.815294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.815310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.825273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.825324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.825338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.825345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.825351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.825369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.835291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.835347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.835360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.835366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.835373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.835387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.845381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.845483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.845496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.845503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.845509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.845523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.855396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.855452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.855465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.855472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.855478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.855492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.865385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.865436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.865449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.865456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.865462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.865477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.875428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.946 [2024-12-10 05:55:47.875497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.946 [2024-12-10 05:55:47.875510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.946 [2024-12-10 05:55:47.875517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.946 [2024-12-10 05:55:47.875523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.946 [2024-12-10 05:55:47.875537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.946 qpair failed and we were unable to recover it. 
00:30:29.946 [2024-12-10 05:55:47.885452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.947 [2024-12-10 05:55:47.885521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.947 [2024-12-10 05:55:47.885534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.947 [2024-12-10 05:55:47.885541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.947 [2024-12-10 05:55:47.885547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.947 [2024-12-10 05:55:47.885562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.947 qpair failed and we were unable to recover it. 
00:30:29.947 [2024-12-10 05:55:47.895434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.947 [2024-12-10 05:55:47.895502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.947 [2024-12-10 05:55:47.895516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.947 [2024-12-10 05:55:47.895523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.947 [2024-12-10 05:55:47.895529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:29.947 [2024-12-10 05:55:47.895544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.947 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-10 05:55:47.905550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.206 [2024-12-10 05:55:47.905624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.206 [2024-12-10 05:55:47.905638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.206 [2024-12-10 05:55:47.905645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.206 [2024-12-10 05:55:47.905652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.206 [2024-12-10 05:55:47.905667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-10 05:55:47.915540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.206 [2024-12-10 05:55:47.915599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.206 [2024-12-10 05:55:47.915618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.206 [2024-12-10 05:55:47.915625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.206 [2024-12-10 05:55:47.915632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.206 [2024-12-10 05:55:47.915647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-10 05:55:47.925562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.206 [2024-12-10 05:55:47.925619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.206 [2024-12-10 05:55:47.925633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.206 [2024-12-10 05:55:47.925640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.206 [2024-12-10 05:55:47.925646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.206 [2024-12-10 05:55:47.925661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-10 05:55:47.935584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.206 [2024-12-10 05:55:47.935639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.206 [2024-12-10 05:55:47.935652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.206 [2024-12-10 05:55:47.935659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.206 [2024-12-10 05:55:47.935666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.206 [2024-12-10 05:55:47.935680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-10 05:55:47.945680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.206 [2024-12-10 05:55:47.945742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.206 [2024-12-10 05:55:47.945755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.206 [2024-12-10 05:55:47.945762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.206 [2024-12-10 05:55:47.945769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.206 [2024-12-10 05:55:47.945783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-10 05:55:47.955697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.206 [2024-12-10 05:55:47.955764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.206 [2024-12-10 05:55:47.955778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.206 [2024-12-10 05:55:47.955785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:47.955794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:47.955808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:47.965714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:47.965780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:47.965794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:47.965800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:47.965807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:47.965822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:47.975747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:47.975804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:47.975817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:47.975824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:47.975831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:47.975846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:47.985676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:47.985730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:47.985744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:47.985750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:47.985757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:47.985772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:47.995684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:47.995737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:47.995750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:47.995757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:47.995763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:47.995778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.005785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.005843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.005856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.005863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.005869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.005883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.015811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.015873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.015887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.015894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.015900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.015915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.025847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.025901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.025914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.025921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.025928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.025943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.035872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.035928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.035941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.035948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.035954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.035970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.045879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.046134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.046155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.046162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.046169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.046187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.055959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.056015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.056028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.056035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.056042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.056057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.065946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.066003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.066017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.066024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.066031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.066045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.076029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.076086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.076100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.076107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.076113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.076128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-10 05:55:48.086026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.207 [2024-12-10 05:55:48.086091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.207 [2024-12-10 05:55:48.086105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.207 [2024-12-10 05:55:48.086112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.207 [2024-12-10 05:55:48.086122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.207 [2024-12-10 05:55:48.086138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.096097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.096194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.096208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.096215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.096225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.096241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.106051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.106125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.106140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.106147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.106153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.106168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.116150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.116205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.116223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.116230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.116237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.116252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.126153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.126222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.126236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.126243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.126249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.126264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.136158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.136221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.136235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.136242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.136249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.136264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.146183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.146240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.146254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.146262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.146268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.146283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-12-10 05:55:48.156236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.208 [2024-12-10 05:55:48.156300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.208 [2024-12-10 05:55:48.156314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.208 [2024-12-10 05:55:48.156321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.208 [2024-12-10 05:55:48.156327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.208 [2024-12-10 05:55:48.156342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.467 [2024-12-10 05:55:48.166264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.467 [2024-12-10 05:55:48.166337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.467 [2024-12-10 05:55:48.166355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.467 [2024-12-10 05:55:48.166362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.467 [2024-12-10 05:55:48.166369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.467 [2024-12-10 05:55:48.166386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.467 qpair failed and we were unable to recover it. 
00:30:30.467 [2024-12-10 05:55:48.176329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.467 [2024-12-10 05:55:48.176398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.467 [2024-12-10 05:55:48.176415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.467 [2024-12-10 05:55:48.176422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.467 [2024-12-10 05:55:48.176428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.467 [2024-12-10 05:55:48.176444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.467 qpair failed and we were unable to recover it. 
00:30:30.467 [2024-12-10 05:55:48.186316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.467 [2024-12-10 05:55:48.186371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.467 [2024-12-10 05:55:48.186385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.467 [2024-12-10 05:55:48.186392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.467 [2024-12-10 05:55:48.186398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.467 [2024-12-10 05:55:48.186413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.467 qpair failed and we were unable to recover it. 
00:30:30.467 [2024-12-10 05:55:48.196326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.467 [2024-12-10 05:55:48.196377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.467 [2024-12-10 05:55:48.196390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.467 [2024-12-10 05:55:48.196397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.467 [2024-12-10 05:55:48.196404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.467 [2024-12-10 05:55:48.196419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.467 qpair failed and we were unable to recover it. 
00:30:30.467 [2024-12-10 05:55:48.206382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.467 [2024-12-10 05:55:48.206439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.467 [2024-12-10 05:55:48.206452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.467 [2024-12-10 05:55:48.206459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.467 [2024-12-10 05:55:48.206465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.467 [2024-12-10 05:55:48.206480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.216353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.216409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.216423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.216434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.216440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.216455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.226354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.226403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.226416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.226423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.226429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.226444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.236436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.236498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.236511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.236518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.236525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.236540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.246488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.246576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.246589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.246596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.246603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.246617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.256520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.256574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.256588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.256596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.256602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.256620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.266482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.266540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.266553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.266560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.266567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.266582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.276555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.276633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.276647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.276654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.276660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.276674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.286629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.286688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.286702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.286709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.286715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.286730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.296560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.296618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.296631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.296638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.296645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.296658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.306674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.306733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.306747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.306753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.306760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.306774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.316668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.316730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.316743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.316750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.316757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.316771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.326659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.326744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.326758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.326765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.326771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.326785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.468 [2024-12-10 05:55:48.336688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.468 [2024-12-10 05:55:48.336744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.468 [2024-12-10 05:55:48.336757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.468 [2024-12-10 05:55:48.336763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.468 [2024-12-10 05:55:48.336770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.468 [2024-12-10 05:55:48.336784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.468 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.346783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.346862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.346875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.346885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.346892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.346906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.356758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.356818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.356831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.356838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.356844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.356858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.366773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.366828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.366842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.366849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.366855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.366870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.376835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.376897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.376911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.376918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.376924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.376938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.386883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.386937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.386951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.386958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.386964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.386983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.396850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.396905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.396918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.396925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.396932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.396946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.406916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.407002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.407015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.407022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.407028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.407042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.469 [2024-12-10 05:55:48.416997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.469 [2024-12-10 05:55:48.417072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.469 [2024-12-10 05:55:48.417090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.469 [2024-12-10 05:55:48.417097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.469 [2024-12-10 05:55:48.417104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.469 [2024-12-10 05:55:48.417120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.469 qpair failed and we were unable to recover it. 
00:30:30.728 [2024-12-10 05:55:48.427013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.728 [2024-12-10 05:55:48.427068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.728 [2024-12-10 05:55:48.427085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.728 [2024-12-10 05:55:48.427094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.728 [2024-12-10 05:55:48.427100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.728 [2024-12-10 05:55:48.427117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.728 qpair failed and we were unable to recover it. 
00:30:30.728 [2024-12-10 05:55:48.437033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.728 [2024-12-10 05:55:48.437085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.728 [2024-12-10 05:55:48.437099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.728 [2024-12-10 05:55:48.437106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.728 [2024-12-10 05:55:48.437112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.728 [2024-12-10 05:55:48.437128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.728 qpair failed and we were unable to recover it. 
00:30:30.728 [2024-12-10 05:55:48.447086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.728 [2024-12-10 05:55:48.447146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.728 [2024-12-10 05:55:48.447160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.728 [2024-12-10 05:55:48.447168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.728 [2024-12-10 05:55:48.447174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.728 [2024-12-10 05:55:48.447189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.728 qpair failed and we were unable to recover it. 
00:30:30.728 [2024-12-10 05:55:48.457092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.728 [2024-12-10 05:55:48.457147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.728 [2024-12-10 05:55:48.457160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.728 [2024-12-10 05:55:48.457167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.728 [2024-12-10 05:55:48.457174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.728 [2024-12-10 05:55:48.457188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.728 qpair failed and we were unable to recover it. 
00:30:30.728 [2024-12-10 05:55:48.467164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.728 [2024-12-10 05:55:48.467224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.728 [2024-12-10 05:55:48.467238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.728 [2024-12-10 05:55:48.467245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.728 [2024-12-10 05:55:48.467251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.728 [2024-12-10 05:55:48.467266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.728 qpair failed and we were unable to recover it. 
00:30:30.728 [2024-12-10 05:55:48.477139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.728 [2024-12-10 05:55:48.477195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.728 [2024-12-10 05:55:48.477213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.728 [2024-12-10 05:55:48.477223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.728 [2024-12-10 05:55:48.477230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.728 [2024-12-10 05:55:48.477245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.487207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.487268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.487282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.487289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.487295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.487309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.497207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.497270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.497283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.497290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.497296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.497311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.507244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.507300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.507313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.507321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.507327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.507341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.517263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.517316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.517329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.517336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.517346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.517364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.527370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.527463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.527477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.527484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.527490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.527505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.537321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.537375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.537389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.537395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.537402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.537417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.547327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.547383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.547396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.547404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.547410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.547425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.557386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.557441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.557454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.557461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.557468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.557482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.567408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.567463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.567476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.567483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.567489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.567504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.577448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.577503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.577516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.577522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.577528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.577542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.587460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.587512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.587525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.587532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.587538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.587554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.597482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.597535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.597548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.597555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.597561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.597576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.607536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.607599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.729 [2024-12-10 05:55:48.607618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.729 [2024-12-10 05:55:48.607625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.729 [2024-12-10 05:55:48.607631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.729 [2024-12-10 05:55:48.607645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.729 qpair failed and we were unable to recover it. 
00:30:30.729 [2024-12-10 05:55:48.617543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.729 [2024-12-10 05:55:48.617623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.617636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.617643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.617649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.617663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.730 [2024-12-10 05:55:48.627589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.730 [2024-12-10 05:55:48.627643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.627656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.627663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.627669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.627684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.730 [2024-12-10 05:55:48.637578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.730 [2024-12-10 05:55:48.637634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.637647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.637655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.637661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.637676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.730 [2024-12-10 05:55:48.647618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.730 [2024-12-10 05:55:48.647676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.647690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.647696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.647706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.647720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.730 [2024-12-10 05:55:48.657644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.730 [2024-12-10 05:55:48.657702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.657715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.657722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.657728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.657742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.730 [2024-12-10 05:55:48.667670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.730 [2024-12-10 05:55:48.667727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.667741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.667748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.667754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.667769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.730 [2024-12-10 05:55:48.677681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.730 [2024-12-10 05:55:48.677736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.730 [2024-12-10 05:55:48.677753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.730 [2024-12-10 05:55:48.677761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.730 [2024-12-10 05:55:48.677768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.730 [2024-12-10 05:55:48.677784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.730 qpair failed and we were unable to recover it. 
00:30:30.989 [2024-12-10 05:55:48.687777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.687833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.687850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.687859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.687866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.687882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.697761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.697820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.697834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.697842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.697848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.697863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.707792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.707844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.707857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.707864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.707870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.707884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.717849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.717900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.717913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.717920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.717927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.717941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.727904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.727963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.727977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.727984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.727990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.728005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.737874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.737931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.737947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.737954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.737960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.737975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.747959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.748014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.748027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.748034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.748041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.748056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.757933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.757991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.758004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.758011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.758018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.758032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.767966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.768020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.768032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.768039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.768045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.768061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.777977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.778030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.778043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.778053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.778060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.778076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.787937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.787994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.788007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.788014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.788021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.788036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.798036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.798088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.798101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.798107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.798114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.798129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.808068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.808125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.990 [2024-12-10 05:55:48.808138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.990 [2024-12-10 05:55:48.808145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.990 [2024-12-10 05:55:48.808151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.990 [2024-12-10 05:55:48.808166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.990 qpair failed and we were unable to recover it. 
00:30:30.990 [2024-12-10 05:55:48.818098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.990 [2024-12-10 05:55:48.818153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.991 [2024-12-10 05:55:48.818166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.991 [2024-12-10 05:55:48.818173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.991 [2024-12-10 05:55:48.818179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.991 [2024-12-10 05:55:48.818197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.991 qpair failed and we were unable to recover it. 
00:30:30.991 [2024-12-10 05:55:48.828123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.991 [2024-12-10 05:55:48.828181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.991 [2024-12-10 05:55:48.828195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.991 [2024-12-10 05:55:48.828202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.991 [2024-12-10 05:55:48.828208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.991 [2024-12-10 05:55:48.828227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.991 qpair failed and we were unable to recover it. 
00:30:30.991 [2024-12-10 05:55:48.838184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.991 [2024-12-10 05:55:48.838244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.991 [2024-12-10 05:55:48.838257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.991 [2024-12-10 05:55:48.838265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.991 [2024-12-10 05:55:48.838271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:30.991 [2024-12-10 05:55:48.838287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.991 qpair failed and we were unable to recover it. 
00:30:30.991 [2024-12-10 05:55:48.848189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.848254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.848268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.848276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.848282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.848297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.858216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.858275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.858288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.858295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.858302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.858317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.868245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.868299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.868314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.868321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.868327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.868341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.878266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.878322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.878335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.878342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.878348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.878363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.888321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.888401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.888415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.888422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.888428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.888442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.898336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.898390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.898403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.898410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.898417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.898431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.908379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.908450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.908464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.908474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.908480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.908494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.918390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.918441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.918454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.918461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.918468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.918483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.928425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.928481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.928494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.928501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.928507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.928522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:30.991 [2024-12-10 05:55:48.938452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.991 [2024-12-10 05:55:48.938510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.991 [2024-12-10 05:55:48.938527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.991 [2024-12-10 05:55:48.938535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.991 [2024-12-10 05:55:48.938542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:30.991 [2024-12-10 05:55:48.938559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:30.991 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:48.948480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:48.948536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:48.948553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:48.948561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:48.948567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:48.948588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:48.958538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:48.958600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:48.958615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:48.958623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:48.958629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:48.958645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:48.968575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:48.968630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:48.968643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:48.968650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:48.968656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:48.968671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:48.978589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:48.978657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:48.978671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:48.978678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:48.978684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:48.978700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:48.988588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:48.988653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:48.988667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:48.988674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:48.988680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:48.988695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:48.998616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:48.998673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:48.998687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:48.998696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:48.998702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:48.998716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:49.008694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:49.008755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:49.008768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:49.008775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:49.008782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:49.008798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:49.018628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:49.018684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:49.018698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:49.018707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:49.018714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:49.018730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:49.028754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.250 [2024-12-10 05:55:49.028812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.250 [2024-12-10 05:55:49.028825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.250 [2024-12-10 05:55:49.028832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.250 [2024-12-10 05:55:49.028838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.250 [2024-12-10 05:55:49.028853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.250 qpair failed and we were unable to recover it.
00:30:31.250 [2024-12-10 05:55:49.038661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.038717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.038733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.038740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.038746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.038761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.048771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.048828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.048841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.048848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.048855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.048869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.058811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.058869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.058882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.058889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.058896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.058911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.068864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.068947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.068961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.068968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.068974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.068988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.078859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.078913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.078926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.078933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.078943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.078958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.088902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.088989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.089003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.089010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.089016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.089030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.098931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.099001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.099015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.099022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.099028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.099042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.108989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.109041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.109054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.109061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.109067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.109082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.118969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.119022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.119036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.119043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.119049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.119063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.129016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.129071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.129084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.129091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.129098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.129113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.139092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.139147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.139160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.139167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.139174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.139189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.149067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.149122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.149135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.149142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.149149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.149164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.159098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.159177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.159191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.251 [2024-12-10 05:55:49.159198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.251 [2024-12-10 05:55:49.159204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.251 [2024-12-10 05:55:49.159221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.251 qpair failed and we were unable to recover it.
00:30:31.251 [2024-12-10 05:55:49.169131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.251 [2024-12-10 05:55:49.169187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.251 [2024-12-10 05:55:49.169204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.252 [2024-12-10 05:55:49.169211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.252 [2024-12-10 05:55:49.169221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.252 [2024-12-10 05:55:49.169237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.252 qpair failed and we were unable to recover it.
00:30:31.252 [2024-12-10 05:55:49.179182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.252 [2024-12-10 05:55:49.179245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.252 [2024-12-10 05:55:49.179258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.252 [2024-12-10 05:55:49.179266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.252 [2024-12-10 05:55:49.179272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.252 [2024-12-10 05:55:49.179287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.252 qpair failed and we were unable to recover it.
00:30:31.252 [2024-12-10 05:55:49.189191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.252 [2024-12-10 05:55:49.189248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.252 [2024-12-10 05:55:49.189261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.252 [2024-12-10 05:55:49.189269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.252 [2024-12-10 05:55:49.189276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.252 [2024-12-10 05:55:49.189291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.252 qpair failed and we were unable to recover it.
00:30:31.252 [2024-12-10 05:55:49.199245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.252 [2024-12-10 05:55:49.199308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.252 [2024-12-10 05:55:49.199328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.252 [2024-12-10 05:55:49.199340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.252 [2024-12-10 05:55:49.199350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.252 [2024-12-10 05:55:49.199373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.252 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.209192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.209260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.209278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.209286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.209295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.209313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.219275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.219334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.219348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.219356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.219362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.219377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.229293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.229341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.229355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.229362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.229369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.229384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.239247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.239304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.239317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.239324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.239330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.239345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.249371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.249440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.249454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.249461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.249468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.249483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.259414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.259476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.259489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.259497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.259503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.259518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.269367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.269461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.269475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.269482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.269488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.269503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.279429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.279488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.511 [2024-12-10 05:55:49.279502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.511 [2024-12-10 05:55:49.279510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.511 [2024-12-10 05:55:49.279516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.511 [2024-12-10 05:55:49.279531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.511 qpair failed and we were unable to recover it. 
00:30:31.511 [2024-12-10 05:55:49.289485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.511 [2024-12-10 05:55:49.289561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.289574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.289580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.289587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.289601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.299501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.299561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.299574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.299581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.299588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.299602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.309553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.309615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.309628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.309635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.309641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.309656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.319545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.319597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.319610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.319617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.319624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.319638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.329575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.329631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.329644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.329651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.329658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.329672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.339604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.339657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.339670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.339682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.339688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.339703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.349619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.349673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.349686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.349693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.349699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.349715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.359651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.359707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.359720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.359727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.359734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.359749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.369694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.369757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.369771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.369778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.369784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.369799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.379713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.379793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.379807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.379815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.379821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.379839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.389670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.389723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.389737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.389744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.389750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.389765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.399742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.399802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.399815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.399822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.399829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.399843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.409800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.409903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.409917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.409924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.409931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.512 [2024-12-10 05:55:49.409946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.512 qpair failed and we were unable to recover it. 
00:30:31.512 [2024-12-10 05:55:49.419830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.512 [2024-12-10 05:55:49.419888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.512 [2024-12-10 05:55:49.419901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.512 [2024-12-10 05:55:49.419908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.512 [2024-12-10 05:55:49.419915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.513 [2024-12-10 05:55:49.419929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.513 qpair failed and we were unable to recover it. 
00:30:31.513 [2024-12-10 05:55:49.429864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.513 [2024-12-10 05:55:49.429917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.513 [2024-12-10 05:55:49.429931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.513 [2024-12-10 05:55:49.429939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.513 [2024-12-10 05:55:49.429945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.513 [2024-12-10 05:55:49.429959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.513 qpair failed and we were unable to recover it. 
00:30:31.513 [2024-12-10 05:55:49.439822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.513 [2024-12-10 05:55:49.439921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.513 [2024-12-10 05:55:49.439935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.513 [2024-12-10 05:55:49.439941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.513 [2024-12-10 05:55:49.439948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.513 [2024-12-10 05:55:49.439962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.513 qpair failed and we were unable to recover it. 
00:30:31.513 [2024-12-10 05:55:49.449858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.513 [2024-12-10 05:55:49.449916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.513 [2024-12-10 05:55:49.449931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.513 [2024-12-10 05:55:49.449938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.513 [2024-12-10 05:55:49.449945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.513 [2024-12-10 05:55:49.449959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.513 qpair failed and we were unable to recover it. 
00:30:31.513 [2024-12-10 05:55:49.460013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.513 [2024-12-10 05:55:49.460096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.513 [2024-12-10 05:55:49.460113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.513 [2024-12-10 05:55:49.460121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.513 [2024-12-10 05:55:49.460127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:31.513 [2024-12-10 05:55:49.460144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:31.513 qpair failed and we were unable to recover it. 
00:30:31.772 [2024-12-10 05:55:49.469905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.469961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.469979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.469990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.469997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.470013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.480013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.480070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.480084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.480091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.480098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.480114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.489966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.490022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.490036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.490042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.490049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.490065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.500007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.500070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.500084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.500092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.500098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.500113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.510029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.510080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.510093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.510100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.510106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.510125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.520110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.520165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.520179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.520186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.520192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.520207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.530141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.530196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.530209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.530220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.530227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.530243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.540202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.540292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.540306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.540313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.772 [2024-12-10 05:55:49.540319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.772 [2024-12-10 05:55:49.540334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.772 qpair failed and we were unable to recover it.
00:30:31.772 [2024-12-10 05:55:49.550141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.772 [2024-12-10 05:55:49.550193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.772 [2024-12-10 05:55:49.550206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.772 [2024-12-10 05:55:49.550213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.550224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.550238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.560194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.560295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.560308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.560315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.560321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.560336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.570213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.570278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.570292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.570299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.570305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.570320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.580237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.580297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.580311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.580318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.580324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.580339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.590272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.590327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.590341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.590347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.590354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.590369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.600383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.600438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.600454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.600461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.600468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.600483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.610334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.610392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.610406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.610413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.610419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.610434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.620415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.620477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.620490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.620497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.620503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.620518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.630440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.630491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.630504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.630511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.630518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.630533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.640522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.640575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.640588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.640595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.640605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.640620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.650437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.650497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.650510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.650518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.650525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.650540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.660562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.660624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.660637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.660645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.660651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.660665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.670510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.670591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.670604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.670611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.670617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.773 [2024-12-10 05:55:49.670631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.773 qpair failed and we were unable to recover it.
00:30:31.773 [2024-12-10 05:55:49.680615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.773 [2024-12-10 05:55:49.680669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.773 [2024-12-10 05:55:49.680682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.773 [2024-12-10 05:55:49.680690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.773 [2024-12-10 05:55:49.680696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.774 [2024-12-10 05:55:49.680711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.774 qpair failed and we were unable to recover it.
00:30:31.774 [2024-12-10 05:55:49.690622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.774 [2024-12-10 05:55:49.690679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.774 [2024-12-10 05:55:49.690692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.774 [2024-12-10 05:55:49.690699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.774 [2024-12-10 05:55:49.690705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.774 [2024-12-10 05:55:49.690720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.774 qpair failed and we were unable to recover it.
00:30:31.774 [2024-12-10 05:55:49.700656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.774 [2024-12-10 05:55:49.700715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.774 [2024-12-10 05:55:49.700728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.774 [2024-12-10 05:55:49.700736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.774 [2024-12-10 05:55:49.700742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.774 [2024-12-10 05:55:49.700757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.774 qpair failed and we were unable to recover it.
00:30:31.774 [2024-12-10 05:55:49.710615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.774 [2024-12-10 05:55:49.710696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.774 [2024-12-10 05:55:49.710710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.774 [2024-12-10 05:55:49.710717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.774 [2024-12-10 05:55:49.710723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.774 [2024-12-10 05:55:49.710737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.774 qpair failed and we were unable to recover it.
00:30:31.774 [2024-12-10 05:55:49.720667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.774 [2024-12-10 05:55:49.720723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.774 [2024-12-10 05:55:49.720740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.774 [2024-12-10 05:55:49.720747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.774 [2024-12-10 05:55:49.720754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:31.774 [2024-12-10 05:55:49.720771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.774 qpair failed and we were unable to recover it.
00:30:32.036 [2024-12-10 05:55:49.730786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.036 [2024-12-10 05:55:49.730870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.036 [2024-12-10 05:55:49.730892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.036 [2024-12-10 05:55:49.730900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.036 [2024-12-10 05:55:49.730906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.036 [2024-12-10 05:55:49.730923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.036 qpair failed and we were unable to recover it.
00:30:32.036 [2024-12-10 05:55:49.740705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.036 [2024-12-10 05:55:49.740760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.036 [2024-12-10 05:55:49.740774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.036 [2024-12-10 05:55:49.740781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.036 [2024-12-10 05:55:49.740788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.036 [2024-12-10 05:55:49.740803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.036 qpair failed and we were unable to recover it.
00:30:32.036 [2024-12-10 05:55:49.750715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.036 [2024-12-10 05:55:49.750771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.036 [2024-12-10 05:55:49.750784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.036 [2024-12-10 05:55:49.750790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.036 [2024-12-10 05:55:49.750797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.036 [2024-12-10 05:55:49.750813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.036 qpair failed and we were unable to recover it.
00:30:32.036 [2024-12-10 05:55:49.760828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.036 [2024-12-10 05:55:49.760901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.036 [2024-12-10 05:55:49.760916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.036 [2024-12-10 05:55:49.760923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.036 [2024-12-10 05:55:49.760929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.036 [2024-12-10 05:55:49.760944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.036 qpair failed and we were unable to recover it.
00:30:32.036 [2024-12-10 05:55:49.770779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.036 [2024-12-10 05:55:49.770836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.036 [2024-12-10 05:55:49.770849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.036 [2024-12-10 05:55:49.770856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.036 [2024-12-10 05:55:49.770866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.037 [2024-12-10 05:55:49.770881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.037 qpair failed and we were unable to recover it.
00:30:32.037 [2024-12-10 05:55:49.780832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.037 [2024-12-10 05:55:49.780922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.037 [2024-12-10 05:55:49.780936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.037 [2024-12-10 05:55:49.780943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.037 [2024-12-10 05:55:49.780949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.037 [2024-12-10 05:55:49.780964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.037 qpair failed and we were unable to recover it.
00:30:32.037 [2024-12-10 05:55:49.790931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.037 [2024-12-10 05:55:49.790990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.037 [2024-12-10 05:55:49.791003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.037 [2024-12-10 05:55:49.791011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.037 [2024-12-10 05:55:49.791017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.037 [2024-12-10 05:55:49.791031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.037 qpair failed and we were unable to recover it.
00:30:32.037 [2024-12-10 05:55:49.800868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.037 [2024-12-10 05:55:49.800919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.037 [2024-12-10 05:55:49.800932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.037 [2024-12-10 05:55:49.800939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.037 [2024-12-10 05:55:49.800945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.037 [2024-12-10 05:55:49.800960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.037 qpair failed and we were unable to recover it.
00:30:32.037 [2024-12-10 05:55:49.810976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.037 [2024-12-10 05:55:49.811031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.037 [2024-12-10 05:55:49.811044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.037 [2024-12-10 05:55:49.811051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.037 [2024-12-10 05:55:49.811058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.037 [2024-12-10 05:55:49.811073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.037 qpair failed and we were unable to recover it.
00:30:32.037 [2024-12-10 05:55:49.820997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.821055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.821069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.821077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.821083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.821097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.830998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.831050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.831063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.831069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.831077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.831091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.841037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.841088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.841101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.841108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.841115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.841129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.851119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.851175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.851188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.851195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.851201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.851215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.861107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.861162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.861176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.861183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.861190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.861205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.871161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.871215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.871232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.871239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.871245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.871260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.881146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.881195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.881208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.881215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.881225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.881240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.891211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.891286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.891305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.037 [2024-12-10 05:55:49.891313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.037 [2024-12-10 05:55:49.891320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.037 [2024-12-10 05:55:49.891337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.037 qpair failed and we were unable to recover it. 
00:30:32.037 [2024-12-10 05:55:49.901245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.037 [2024-12-10 05:55:49.901301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.037 [2024-12-10 05:55:49.901316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.901326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.901332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.901348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.911254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.911310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.911324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.911332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.911338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.911354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.921262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.921317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.921331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.921339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.921346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.921361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.931302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.931359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.931372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.931379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.931385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.931400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.941332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.941390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.941403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.941410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.941417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.941435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.951455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.951526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.951539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.951546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.951552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.951567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.961436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.961492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.961505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.961512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.961518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.961533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.971460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.971517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.971530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.971538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.971543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.971558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.038 [2024-12-10 05:55:49.981492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.038 [2024-12-10 05:55:49.981584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.038 [2024-12-10 05:55:49.981602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.038 [2024-12-10 05:55:49.981609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.038 [2024-12-10 05:55:49.981615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.038 [2024-12-10 05:55:49.981635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.038 qpair failed and we were unable to recover it. 
00:30:32.369 [2024-12-10 05:55:49.991493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.369 [2024-12-10 05:55:49.991565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.369 [2024-12-10 05:55:49.991591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.369 [2024-12-10 05:55:49.991605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.369 [2024-12-10 05:55:49.991615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.369 [2024-12-10 05:55:49.991640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.369 qpair failed and we were unable to recover it. 
00:30:32.369 [2024-12-10 05:55:50.001551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.369 [2024-12-10 05:55:50.001615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.369 [2024-12-10 05:55:50.001636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.369 [2024-12-10 05:55:50.001644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.369 [2024-12-10 05:55:50.001652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.369 [2024-12-10 05:55:50.001672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.369 qpair failed and we were unable to recover it. 
00:30:32.369 [2024-12-10 05:55:50.011567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.369 [2024-12-10 05:55:50.011628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.369 [2024-12-10 05:55:50.011645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.369 [2024-12-10 05:55:50.011654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.369 [2024-12-10 05:55:50.011662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.369 [2024-12-10 05:55:50.011680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.369 qpair failed and we were unable to recover it. 
00:30:32.369 [2024-12-10 05:55:50.021578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.369 [2024-12-10 05:55:50.021633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.369 [2024-12-10 05:55:50.021649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.369 [2024-12-10 05:55:50.021657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.369 [2024-12-10 05:55:50.021664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.369 [2024-12-10 05:55:50.021682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.369 qpair failed and we were unable to recover it. 
00:30:32.369 [2024-12-10 05:55:50.031525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.369 [2024-12-10 05:55:50.031632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.031649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.031656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.031663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.031678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.041637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.041691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.041706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.041714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.041721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.041737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.051630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.051697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.051711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.051718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.051725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.051741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.061682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.061739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.061753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.061762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.061768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.061783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.071640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.071725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.071740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.071749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.071756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.071776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.081732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.081790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.081804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.081811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.081817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.081833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.091803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.091861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.091875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.091883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.091890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.091906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.101801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.101853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.101867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.101874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.101881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.101897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.111755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.111854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.111868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.111892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.111899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.111915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.121840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.121925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.121940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.121947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.121954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.121970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.131908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.131990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.132004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.132012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.132018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.132034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.141931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.141984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.141997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.142004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.142011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.142027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.151934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.151988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.152001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.152008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.152015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.370 [2024-12-10 05:55:50.152029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.370 qpair failed and we were unable to recover it. 
00:30:32.370 [2024-12-10 05:55:50.161953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.370 [2024-12-10 05:55:50.162006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.370 [2024-12-10 05:55:50.162023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.370 [2024-12-10 05:55:50.162031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.370 [2024-12-10 05:55:50.162037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.162053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.171987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.172047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.172061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.172070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.172076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.172092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.182014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.182070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.182084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.182091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.182098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.182113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.192035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.192090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.192103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.192111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.192118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.192133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.202100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.202152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.202166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.202174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.202183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.202199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.212020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.212079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.212093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.212101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.212107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.212123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.222117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.222173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.222186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.222193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.222200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.222214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.232196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.232259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.232273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.232281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.232287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.232304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.242173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.242232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.242246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.242254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.242260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.242275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.252237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.252309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.252322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.252329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.252336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.252352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.262255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.262312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.262326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.262333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.262340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.262355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.272262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.272319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.272334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.272341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.272348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.272364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.282282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.282388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.282402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.282409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.282416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.282431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.371 [2024-12-10 05:55:50.292375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.371 [2024-12-10 05:55:50.292434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.371 [2024-12-10 05:55:50.292452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.371 [2024-12-10 05:55:50.292460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.371 [2024-12-10 05:55:50.292467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.371 [2024-12-10 05:55:50.292483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.371 qpair failed and we were unable to recover it. 
00:30:32.372 [2024-12-10 05:55:50.302364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.372 [2024-12-10 05:55:50.302427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.372 [2024-12-10 05:55:50.302441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.372 [2024-12-10 05:55:50.302450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.372 [2024-12-10 05:55:50.302457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.372 [2024-12-10 05:55:50.302473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.372 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.312440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.312513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.312529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.312538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.312546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.312563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.322407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.322464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.322479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.322487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.322494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.322509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.332484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.332591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.332605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.332613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.332622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.332638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.342466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.342525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.342539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.342547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.342554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.342569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.352529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.352588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.352602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.352610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.352617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.352633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.362545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.362604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.362617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.362625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.362631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.362647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.372597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.372707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.372721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.372728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.372735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.372751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.382528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.382588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.382603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.382611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.382617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.382634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.392616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.392672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.392686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.392695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.392702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.392718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.402683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.402740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.402753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.402762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.402769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.402784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.412698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.412760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.412774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.412782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.412789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.412805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.422706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.650 [2024-12-10 05:55:50.422767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.650 [2024-12-10 05:55:50.422781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.650 [2024-12-10 05:55:50.422789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.650 [2024-12-10 05:55:50.422796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.650 [2024-12-10 05:55:50.422812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.650 qpair failed and we were unable to recover it. 
00:30:32.650 [2024-12-10 05:55:50.432740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.432792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.432805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.432813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.432820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.432836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.442749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.442798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.442812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.442820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.442827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.442843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.452789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.452845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.452860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.452867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.452874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.452890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.462811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.462872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.462885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.462896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.462902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.462918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.472832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.472890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.472904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.472912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.472919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.472934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.482864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.482923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.482937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.482945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.482952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.482967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.492905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.492963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.492977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.492985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.492992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.493008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.502918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.503022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.503036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.503044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.503051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.503070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.512942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.512999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.513013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.513021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.513028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.513044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.523044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.523129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.523146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.523155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.523162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.523179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.533004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.533062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.533076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.533082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.533089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.533104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.543022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.543083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.543097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.543105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.543111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.543127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.553064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.553144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.553159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.651 [2024-12-10 05:55:50.553166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.651 [2024-12-10 05:55:50.553173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.651 [2024-12-10 05:55:50.553188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.651 qpair failed and we were unable to recover it. 
00:30:32.651 [2024-12-10 05:55:50.563089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.651 [2024-12-10 05:55:50.563143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.651 [2024-12-10 05:55:50.563157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.652 [2024-12-10 05:55:50.563164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.652 [2024-12-10 05:55:50.563171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.652 [2024-12-10 05:55:50.563186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.652 qpair failed and we were unable to recover it. 
00:30:32.652 [2024-12-10 05:55:50.573125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.652 [2024-12-10 05:55:50.573185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.652 [2024-12-10 05:55:50.573198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.652 [2024-12-10 05:55:50.573206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.652 [2024-12-10 05:55:50.573213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.652 [2024-12-10 05:55:50.573233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.652 qpair failed and we were unable to recover it. 
00:30:32.652 [2024-12-10 05:55:50.583154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.652 [2024-12-10 05:55:50.583208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.652 [2024-12-10 05:55:50.583225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.652 [2024-12-10 05:55:50.583232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.652 [2024-12-10 05:55:50.583239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.652 [2024-12-10 05:55:50.583254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.652 qpair failed and we were unable to recover it. 
00:30:32.652 [2024-12-10 05:55:50.593177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.652 [2024-12-10 05:55:50.593253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.652 [2024-12-10 05:55:50.593270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.652 [2024-12-10 05:55:50.593277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.652 [2024-12-10 05:55:50.593283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.652 [2024-12-10 05:55:50.593299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.652 qpair failed and we were unable to recover it. 
00:30:32.911 [2024-12-10 05:55:50.603275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.911 [2024-12-10 05:55:50.603338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.911 [2024-12-10 05:55:50.603351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.911 [2024-12-10 05:55:50.603359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.911 [2024-12-10 05:55:50.603365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.911 [2024-12-10 05:55:50.603381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.911 qpair failed and we were unable to recover it. 
00:30:32.911 [2024-12-10 05:55:50.613276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.911 [2024-12-10 05:55:50.613338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.911 [2024-12-10 05:55:50.613352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.911 [2024-12-10 05:55:50.613361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.911 [2024-12-10 05:55:50.613368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.911 [2024-12-10 05:55:50.613383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.911 qpair failed and we were unable to recover it. 
00:30:32.911 [2024-12-10 05:55:50.623226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.911 [2024-12-10 05:55:50.623290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.911 [2024-12-10 05:55:50.623304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.911 [2024-12-10 05:55:50.623311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.911 [2024-12-10 05:55:50.623318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.911 [2024-12-10 05:55:50.623333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.911 qpair failed and we were unable to recover it. 
00:30:32.911 [2024-12-10 05:55:50.633305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.911 [2024-12-10 05:55:50.633360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.911 [2024-12-10 05:55:50.633373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.911 [2024-12-10 05:55:50.633380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.911 [2024-12-10 05:55:50.633387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.911 [2024-12-10 05:55:50.633405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.911 qpair failed and we were unable to recover it. 
00:30:32.911 [2024-12-10 05:55:50.643371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.911 [2024-12-10 05:55:50.643432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.911 [2024-12-10 05:55:50.643445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.911 [2024-12-10 05:55:50.643453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.911 [2024-12-10 05:55:50.643459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.911 [2024-12-10 05:55:50.643474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.911 qpair failed and we were unable to recover it. 
00:30:32.911 [2024-12-10 05:55:50.653367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.911 [2024-12-10 05:55:50.653425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.912 [2024-12-10 05:55:50.653439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.912 [2024-12-10 05:55:50.653446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.912 [2024-12-10 05:55:50.653454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.912 [2024-12-10 05:55:50.653468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.912 qpair failed and we were unable to recover it. 
00:30:32.912 [2024-12-10 05:55:50.663413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.912 [2024-12-10 05:55:50.663482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.912 [2024-12-10 05:55:50.663497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.912 [2024-12-10 05:55:50.663504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.912 [2024-12-10 05:55:50.663510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.912 [2024-12-10 05:55:50.663525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.912 qpair failed and we were unable to recover it. 
00:30:32.912 [2024-12-10 05:55:50.673412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.912 [2024-12-10 05:55:50.673467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.912 [2024-12-10 05:55:50.673479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.912 [2024-12-10 05:55:50.673486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.912 [2024-12-10 05:55:50.673492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.912 [2024-12-10 05:55:50.673507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.912 qpair failed and we were unable to recover it. 
00:30:32.912 [2024-12-10 05:55:50.683447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.912 [2024-12-10 05:55:50.683500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.912 [2024-12-10 05:55:50.683513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.912 [2024-12-10 05:55:50.683520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.912 [2024-12-10 05:55:50.683527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.912 [2024-12-10 05:55:50.683542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.912 qpair failed and we were unable to recover it. 
00:30:32.912 [2024-12-10 05:55:50.693484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.912 [2024-12-10 05:55:50.693542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.912 [2024-12-10 05:55:50.693556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.912 [2024-12-10 05:55:50.693563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.912 [2024-12-10 05:55:50.693570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.912 [2024-12-10 05:55:50.693585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.912 qpair failed and we were unable to recover it. 
00:30:32.912 [2024-12-10 05:55:50.703498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.912 [2024-12-10 05:55:50.703555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.912 [2024-12-10 05:55:50.703569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.912 [2024-12-10 05:55:50.703576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.912 [2024-12-10 05:55:50.703582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:32.912 [2024-12-10 05:55:50.703597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.912 qpair failed and we were unable to recover it. 
00:30:32.912 [2024-12-10 05:55:50.713539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.713596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.713609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.713616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.713623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.713638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.723553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.723605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.723622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.723629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.723635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.723650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.733605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.733663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.733677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.733685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.733691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.733707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.743613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.743672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.743685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.743693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.743699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.743715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.753695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.753752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.753766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.753773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.753780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.753795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.763699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.763756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.763770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.763777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.763787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.763802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.773704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.773762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.773776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.773784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.773791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.912 [2024-12-10 05:55:50.773806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.912 qpair failed and we were unable to recover it.
00:30:32.912 [2024-12-10 05:55:50.783718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.912 [2024-12-10 05:55:50.783773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.912 [2024-12-10 05:55:50.783787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.912 [2024-12-10 05:55:50.783794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.912 [2024-12-10 05:55:50.783801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.783816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.793747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.793805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.793819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.793826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.793833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.793848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.803776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.803824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.803838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.803845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.803851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.803866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.813808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.813864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.813878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.813885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.813891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.813906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.823838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.823892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.823905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.823912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.823919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.823934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.833867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.833952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.833966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.833973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.833979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.833994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.843915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.843967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.843981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.843988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.843995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.844010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:32.913 [2024-12-10 05:55:50.853883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.913 [2024-12-10 05:55:50.853981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.913 [2024-12-10 05:55:50.853999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.913 [2024-12-10 05:55:50.854006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.913 [2024-12-10 05:55:50.854013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:32.913 [2024-12-10 05:55:50.854028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.913 qpair failed and we were unable to recover it.
00:30:33.172 [2024-12-10 05:55:50.863959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.172 [2024-12-10 05:55:50.864017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.172 [2024-12-10 05:55:50.864031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.172 [2024-12-10 05:55:50.864039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.172 [2024-12-10 05:55:50.864047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.172 [2024-12-10 05:55:50.864065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.172 qpair failed and we were unable to recover it.
00:30:33.172 [2024-12-10 05:55:50.873999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.172 [2024-12-10 05:55:50.874059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.172 [2024-12-10 05:55:50.874073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.172 [2024-12-10 05:55:50.874080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.172 [2024-12-10 05:55:50.874086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.172 [2024-12-10 05:55:50.874101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.172 qpair failed and we were unable to recover it.
00:30:33.172 [2024-12-10 05:55:50.884046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.172 [2024-12-10 05:55:50.884116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.172 [2024-12-10 05:55:50.884130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.172 [2024-12-10 05:55:50.884137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.172 [2024-12-10 05:55:50.884144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.172 [2024-12-10 05:55:50.884160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.172 qpair failed and we were unable to recover it.
00:30:33.172 [2024-12-10 05:55:50.894050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.172 [2024-12-10 05:55:50.894107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.172 [2024-12-10 05:55:50.894120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.894130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.894137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.894152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.904081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.904142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.904155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.904163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.904169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.904184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.914038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.914127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.914141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.914148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.914154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.914169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.924118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.924173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.924186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.924193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.924200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.924214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.934176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.934237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.934250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.934257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.934264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.934280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.944186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.944248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.944262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.944269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.944276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.944292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.954205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.954264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.954277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.954283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.954290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.954305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.964225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.964284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.964298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.964305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.964311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.964326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.974277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.974334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.974348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.974355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.974362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.974377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.984332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.984392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.984406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.984413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.984420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.984435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:50.994350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:50.994411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:50.994425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:50.994432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:50.994439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:50.994455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:51.004343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:51.004400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:51.004413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:51.004420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:51.004427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:51.004441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:51.014399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:51.014454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:51.014467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:51.014474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:51.014481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.173 [2024-12-10 05:55:51.014496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.173 qpair failed and we were unable to recover it.
00:30:33.173 [2024-12-10 05:55:51.024359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.173 [2024-12-10 05:55:51.024413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.173 [2024-12-10 05:55:51.024427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.173 [2024-12-10 05:55:51.024438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.173 [2024-12-10 05:55:51.024446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.174 [2024-12-10 05:55:51.024464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.174 qpair failed and we were unable to recover it.
00:30:33.174 [2024-12-10 05:55:51.034473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.174 [2024-12-10 05:55:51.034532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.174 [2024-12-10 05:55:51.034546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.174 [2024-12-10 05:55:51.034553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.174 [2024-12-10 05:55:51.034560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.174 [2024-12-10 05:55:51.034575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.174 qpair failed and we were unable to recover it.
00:30:33.174 [2024-12-10 05:55:51.044410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.174 [2024-12-10 05:55:51.044458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.174 [2024-12-10 05:55:51.044471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.174 [2024-12-10 05:55:51.044478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.174 [2024-12-10 05:55:51.044484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.174 [2024-12-10 05:55:51.044499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.174 qpair failed and we were unable to recover it.
00:30:33.174 [2024-12-10 05:55:51.054507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.174 [2024-12-10 05:55:51.054565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.174 [2024-12-10 05:55:51.054578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.174 [2024-12-10 05:55:51.054585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.174 [2024-12-10 05:55:51.054591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.174 [2024-12-10 05:55:51.054606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.174 qpair failed and we were unable to recover it.
00:30:33.174 [2024-12-10 05:55:51.064523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.064578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.064591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.064598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.064605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.064623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-10 05:55:51.074560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.074612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.074626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.074633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.074639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.074654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-10 05:55:51.084518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.084575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.084588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.084595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.084602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.084618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-10 05:55:51.094643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.094707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.094720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.094728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.094734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.094749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-10 05:55:51.104615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.104704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.104717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.104725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.104731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.104745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-10 05:55:51.114608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.114665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.114679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.114686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.114692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.114708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-10 05:55:51.124654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.174 [2024-12-10 05:55:51.124718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.174 [2024-12-10 05:55:51.124731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.174 [2024-12-10 05:55:51.124739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.174 [2024-12-10 05:55:51.124746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.174 [2024-12-10 05:55:51.124761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.134807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.134873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.134886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.134893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.134899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.134914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.144695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.144752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.144765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.144772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.144780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.144795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.154711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.154774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.154791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.154799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.154805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.154820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.164843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.164920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.164934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.164942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.164950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.164964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.174786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.174843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.174855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.174862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.174869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.174883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.184803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.184866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.184879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.184886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.184893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.184907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.194908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.194998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.195011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.195018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.195028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.195042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.204899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.204967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.204980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.204988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.434 [2024-12-10 05:55:51.204994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.434 [2024-12-10 05:55:51.205009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.434 qpair failed and we were unable to recover it. 
00:30:33.434 [2024-12-10 05:55:51.214895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.434 [2024-12-10 05:55:51.214951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.434 [2024-12-10 05:55:51.214965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.434 [2024-12-10 05:55:51.214972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.214978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.214993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.224904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.224954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.224968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.224975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.224981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.224996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.234939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.234990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.235004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.235010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.235017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.235032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.244977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.245074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.245095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.245103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.245109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.245131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.255011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.255068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.255082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.255089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.255096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.255112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.265099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.265155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.265168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.265175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.265182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.265197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.275079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.275183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.275198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.275207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.275215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.275237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.285134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.285189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.285205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.285213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.285223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.285238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.295175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.295237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.295251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.295259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.295265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.295281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.305206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.305279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.305294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.305302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.305308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.305323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.315257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.315314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.315327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.315335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.315342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.315357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.325279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.435 [2024-12-10 05:55:51.325341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.435 [2024-12-10 05:55:51.325355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.435 [2024-12-10 05:55:51.325363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.435 [2024-12-10 05:55:51.325372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.435 [2024-12-10 05:55:51.325387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.435 qpair failed and we were unable to recover it. 
00:30:33.435 [2024-12-10 05:55:51.335286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.435 [2024-12-10 05:55:51.335340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.435 [2024-12-10 05:55:51.335354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.435 [2024-12-10 05:55:51.335361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.435 [2024-12-10 05:55:51.335368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.435 [2024-12-10 05:55:51.335384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.435 qpair failed and we were unable to recover it.
00:30:33.435 [2024-12-10 05:55:51.345311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.435 [2024-12-10 05:55:51.345363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.436 [2024-12-10 05:55:51.345377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.436 [2024-12-10 05:55:51.345384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.436 [2024-12-10 05:55:51.345391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.436 [2024-12-10 05:55:51.345406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.436 qpair failed and we were unable to recover it.
00:30:33.436 [2024-12-10 05:55:51.355331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.436 [2024-12-10 05:55:51.355387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.436 [2024-12-10 05:55:51.355400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.436 [2024-12-10 05:55:51.355407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.436 [2024-12-10 05:55:51.355413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.436 [2024-12-10 05:55:51.355428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.436 qpair failed and we were unable to recover it.
00:30:33.436 [2024-12-10 05:55:51.365362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.436 [2024-12-10 05:55:51.365411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.436 [2024-12-10 05:55:51.365424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.436 [2024-12-10 05:55:51.365431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.436 [2024-12-10 05:55:51.365437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.436 [2024-12-10 05:55:51.365452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.436 qpair failed and we were unable to recover it.
00:30:33.436 [2024-12-10 05:55:51.375403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.436 [2024-12-10 05:55:51.375459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.436 [2024-12-10 05:55:51.375473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.436 [2024-12-10 05:55:51.375481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.436 [2024-12-10 05:55:51.375487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.436 [2024-12-10 05:55:51.375502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.436 qpair failed and we were unable to recover it.
00:30:33.436 [2024-12-10 05:55:51.385440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.436 [2024-12-10 05:55:51.385498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.436 [2024-12-10 05:55:51.385512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.436 [2024-12-10 05:55:51.385519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.436 [2024-12-10 05:55:51.385526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.436 [2024-12-10 05:55:51.385542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.436 qpair failed and we were unable to recover it.
00:30:33.695 [2024-12-10 05:55:51.395477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.695 [2024-12-10 05:55:51.395534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.695 [2024-12-10 05:55:51.395547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.695 [2024-12-10 05:55:51.395553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.695 [2024-12-10 05:55:51.395560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.695 [2024-12-10 05:55:51.395575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.695 qpair failed and we were unable to recover it.
00:30:33.695 [2024-12-10 05:55:51.405518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.695 [2024-12-10 05:55:51.405570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.695 [2024-12-10 05:55:51.405583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.695 [2024-12-10 05:55:51.405590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.695 [2024-12-10 05:55:51.405597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.695 [2024-12-10 05:55:51.405612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.695 qpair failed and we were unable to recover it.
00:30:33.695 [2024-12-10 05:55:51.415527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.695 [2024-12-10 05:55:51.415586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.695 [2024-12-10 05:55:51.415603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.695 [2024-12-10 05:55:51.415610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.695 [2024-12-10 05:55:51.415616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.695 [2024-12-10 05:55:51.415631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.695 qpair failed and we were unable to recover it.
00:30:33.695 [2024-12-10 05:55:51.425542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.695 [2024-12-10 05:55:51.425597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.695 [2024-12-10 05:55:51.425610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.695 [2024-12-10 05:55:51.425617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.695 [2024-12-10 05:55:51.425623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.695 [2024-12-10 05:55:51.425638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.695 qpair failed and we were unable to recover it.
00:30:33.695 [2024-12-10 05:55:51.435563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.695 [2024-12-10 05:55:51.435616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.695 [2024-12-10 05:55:51.435630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.695 [2024-12-10 05:55:51.435637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.695 [2024-12-10 05:55:51.435643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.695 [2024-12-10 05:55:51.435658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.695 qpair failed and we were unable to recover it.
00:30:33.695 [2024-12-10 05:55:51.445599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.695 [2024-12-10 05:55:51.445646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.695 [2024-12-10 05:55:51.445659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.695 [2024-12-10 05:55:51.445667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.695 [2024-12-10 05:55:51.445673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.445688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.455602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.455658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.455672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.455682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.455689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.455703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.465662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.465718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.465731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.465739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.465746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.465760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.475700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.475757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.475770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.475777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.475783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.475798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.485706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.485761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.485775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.485782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.485789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.485803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.495787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.495895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.495909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.495916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.495922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.495937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.505801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.505859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.505873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.505880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.505886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.505901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.515801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.515855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.515868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.515875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.515882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.515896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.525824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.525881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.525895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.525902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.525909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.525924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.535886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.535943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.535957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.535963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.535969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.535984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.545896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.545955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.545969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.545976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.545982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.545998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.555831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.555921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.555934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.555942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.555948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.555963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.565943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.565996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.566009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.566016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.566023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.566038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.575979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.696 [2024-12-10 05:55:51.576037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.696 [2024-12-10 05:55:51.576051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.696 [2024-12-10 05:55:51.576058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.696 [2024-12-10 05:55:51.576065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.696 [2024-12-10 05:55:51.576080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.696 qpair failed and we were unable to recover it.
00:30:33.696 [2024-12-10 05:55:51.585982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.586035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.586049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.586061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.586067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.586083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.697 [2024-12-10 05:55:51.596016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.596069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.596083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.596090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.596097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.596112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.697 [2024-12-10 05:55:51.606056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.606109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.606122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.606129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.606136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.606151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.697 [2024-12-10 05:55:51.616096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.616177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.616191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.616198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.616204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.616223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.697 [2024-12-10 05:55:51.626114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.626179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.626192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.626199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.626206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.626231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.697 [2024-12-10 05:55:51.636128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.636185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.636199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.636207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.636214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.636234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.697 [2024-12-10 05:55:51.646198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.697 [2024-12-10 05:55:51.646261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.697 [2024-12-10 05:55:51.646275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.697 [2024-12-10 05:55:51.646283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.697 [2024-12-10 05:55:51.646289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.697 [2024-12-10 05:55:51.646305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.697 qpair failed and we were unable to recover it.
00:30:33.956 [2024-12-10 05:55:51.656237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.956 [2024-12-10 05:55:51.656304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.956 [2024-12-10 05:55:51.656317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.956 [2024-12-10 05:55:51.656325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.956 [2024-12-10 05:55:51.656331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.956 [2024-12-10 05:55:51.656347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.956 qpair failed and we were unable to recover it.
00:30:33.956 [2024-12-10 05:55:51.666233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.956 [2024-12-10 05:55:51.666288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.956 [2024-12-10 05:55:51.666301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.956 [2024-12-10 05:55:51.666309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.956 [2024-12-10 05:55:51.666315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.956 [2024-12-10 05:55:51.666331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.956 qpair failed and we were unable to recover it.
00:30:33.956 [2024-12-10 05:55:51.676270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.956 [2024-12-10 05:55:51.676337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.956 [2024-12-10 05:55:51.676351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.957 [2024-12-10 05:55:51.676358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.957 [2024-12-10 05:55:51.676365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:33.957 [2024-12-10 05:55:51.676380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:33.957 qpair failed and we were unable to recover it.
00:30:33.957 [2024-12-10 05:55:51.686341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.686396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.686410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.686417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.686423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.686438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.696322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.696378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.696392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.696399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.696406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.696421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.706385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.706439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.706453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.706459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.706466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.706498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.716387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.716442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.716458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.716466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.716472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.716487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.726405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.726471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.726485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.726492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.726499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.726514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.736440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.736497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.736510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.736517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.736524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.736539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.746470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.746525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.746537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.746544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.746551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.746565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.756491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.756541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.756554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.756561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.756572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.756586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.766521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.766574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.766588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.766595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.766601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.766616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.776546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.776604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.776617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.776625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.776633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.776649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.786580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.786637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.786651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.786658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.786665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.786680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.796594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.796648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.796661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.796668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.796675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.796690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.957 [2024-12-10 05:55:51.806626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.957 [2024-12-10 05:55:51.806678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.957 [2024-12-10 05:55:51.806691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.957 [2024-12-10 05:55:51.806698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.957 [2024-12-10 05:55:51.806705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.957 [2024-12-10 05:55:51.806720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.957 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.816656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.816710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.816723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.816730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.816736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.816751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.826686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.826742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.826755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.826762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.826769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.826783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.836704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.836810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.836823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.836830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.836837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.836851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.846773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.846874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.846891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.846898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.846904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.846919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.856739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.856805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.856818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.856825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.856831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.856845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.866828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.866884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.866896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.866903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.866909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.866924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.876815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.876870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.876884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.876891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.876897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.876912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.886839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.886895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.886908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.886916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.886926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.886941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.896890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.896988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.897005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.897013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.897020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.897036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:33.958 [2024-12-10 05:55:51.906922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.958 [2024-12-10 05:55:51.907001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.958 [2024-12-10 05:55:51.907015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.958 [2024-12-10 05:55:51.907022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.958 [2024-12-10 05:55:51.907029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:33.958 [2024-12-10 05:55:51.907044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:33.958 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.916993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.917081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.917095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.917102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.917109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.917123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.927010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.927126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.927141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.927148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.927155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.927172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.937004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.937070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.937084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.937092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.937098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.937113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.947022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.947080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.947094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.947101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.947108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.947123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.957094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.957147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.957160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.957167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.957174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.957188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.967056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.967120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.967134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.967141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.967147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.967162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.977181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.977288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.977305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.977312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.977319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.977334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.987136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.987188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.987202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.987209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.987216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.987235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:51.997162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.218 [2024-12-10 05:55:51.997220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.218 [2024-12-10 05:55:51.997234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.218 [2024-12-10 05:55:51.997242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.218 [2024-12-10 05:55:51.997248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.218 [2024-12-10 05:55:51.997263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.218 qpair failed and we were unable to recover it. 
00:30:34.218 [2024-12-10 05:55:52.007194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.007257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.007270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.007278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.007284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.007299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.017237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.017296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.017310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.017321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.017328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.017344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.027305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.027373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.027387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.027394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.027401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.027416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.037274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.037331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.037344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.037351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.037358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.037373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.047296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.047345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.047359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.047365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.047372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.047388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.057335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.057392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.057405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.057412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.057420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.057434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.067359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.067412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.067425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.067432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.067439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.067453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.077328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.077413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.077426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.077433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.077440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.077454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.087422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.087478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.087492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.087500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.087506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.087521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.097512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.097569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.097583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.097591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.097597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.097612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.107489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.107550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.107564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.107571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.107578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.107593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.117528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.117583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.219 [2024-12-10 05:55:52.117596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.219 [2024-12-10 05:55:52.117603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.219 [2024-12-10 05:55:52.117610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.219 [2024-12-10 05:55:52.117625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.219 qpair failed and we were unable to recover it. 
00:30:34.219 [2024-12-10 05:55:52.127544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.219 [2024-12-10 05:55:52.127596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.220 [2024-12-10 05:55:52.127610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.220 [2024-12-10 05:55:52.127617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.220 [2024-12-10 05:55:52.127623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.220 [2024-12-10 05:55:52.127638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.220 qpair failed and we were unable to recover it. 
00:30:34.220 [2024-12-10 05:55:52.137609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.220 [2024-12-10 05:55:52.137665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.220 [2024-12-10 05:55:52.137678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.220 [2024-12-10 05:55:52.137685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.220 [2024-12-10 05:55:52.137691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.220 [2024-12-10 05:55:52.137706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.220 qpair failed and we were unable to recover it. 
00:30:34.220 [2024-12-10 05:55:52.147590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.220 [2024-12-10 05:55:52.147666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.220 [2024-12-10 05:55:52.147680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.220 [2024-12-10 05:55:52.147690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.220 [2024-12-10 05:55:52.147697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.220 [2024-12-10 05:55:52.147713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.220 qpair failed and we were unable to recover it. 
00:30:34.220 [2024-12-10 05:55:52.157657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.220 [2024-12-10 05:55:52.157714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.220 [2024-12-10 05:55:52.157727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.220 [2024-12-10 05:55:52.157734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.220 [2024-12-10 05:55:52.157740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.220 [2024-12-10 05:55:52.157755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.220 qpair failed and we were unable to recover it. 
00:30:34.220 [2024-12-10 05:55:52.167659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.220 [2024-12-10 05:55:52.167720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.220 [2024-12-10 05:55:52.167733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.220 [2024-12-10 05:55:52.167741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.220 [2024-12-10 05:55:52.167748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.220 [2024-12-10 05:55:52.167763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.220 qpair failed and we were unable to recover it. 
00:30:34.479 [2024-12-10 05:55:52.177699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.479 [2024-12-10 05:55:52.177760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.479 [2024-12-10 05:55:52.177773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.479 [2024-12-10 05:55:52.177780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.479 [2024-12-10 05:55:52.177786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.479 [2024-12-10 05:55:52.177801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.479 qpair failed and we were unable to recover it. 
00:30:34.479 [2024-12-10 05:55:52.187736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.479 [2024-12-10 05:55:52.187793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.479 [2024-12-10 05:55:52.187806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.479 [2024-12-10 05:55:52.187813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.479 [2024-12-10 05:55:52.187819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.479 [2024-12-10 05:55:52.187837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.479 qpair failed and we were unable to recover it. 
00:30:34.479 [2024-12-10 05:55:52.197736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.479 [2024-12-10 05:55:52.197792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.479 [2024-12-10 05:55:52.197805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.479 [2024-12-10 05:55:52.197812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.479 [2024-12-10 05:55:52.197818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.479 [2024-12-10 05:55:52.197833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.479 qpair failed and we were unable to recover it. 
00:30:34.479 [2024-12-10 05:55:52.207741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.479 [2024-12-10 05:55:52.207794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.479 [2024-12-10 05:55:52.207808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.479 [2024-12-10 05:55:52.207814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.479 [2024-12-10 05:55:52.207821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.479 [2024-12-10 05:55:52.207836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.479 qpair failed and we were unable to recover it. 
00:30:34.479 [2024-12-10 05:55:52.217800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.479 [2024-12-10 05:55:52.217856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.479 [2024-12-10 05:55:52.217870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.479 [2024-12-10 05:55:52.217877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.480 [2024-12-10 05:55:52.217883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.480 [2024-12-10 05:55:52.217898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.480 qpair failed and we were unable to recover it. 
00:30:34.480 [2024-12-10 05:55:52.227807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.480 [2024-12-10 05:55:52.227872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.480 [2024-12-10 05:55:52.227887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.480 [2024-12-10 05:55:52.227894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.480 [2024-12-10 05:55:52.227901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.480 [2024-12-10 05:55:52.227916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.480 qpair failed and we were unable to recover it. 
00:30:34.480 [2024-12-10 05:55:52.237838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.480 [2024-12-10 05:55:52.237894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.480 [2024-12-10 05:55:52.237907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.480 [2024-12-10 05:55:52.237915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.480 [2024-12-10 05:55:52.237922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.480 [2024-12-10 05:55:52.237937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.480 qpair failed and we were unable to recover it. 
00:30:34.480 [2024-12-10 05:55:52.247877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.480 [2024-12-10 05:55:52.247941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.480 [2024-12-10 05:55:52.247955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.480 [2024-12-10 05:55:52.247962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.480 [2024-12-10 05:55:52.247969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.480 [2024-12-10 05:55:52.247984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.480 qpair failed and we were unable to recover it. 
00:30:34.480 [2024-12-10 05:55:52.257907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.480 [2024-12-10 05:55:52.258004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.480 [2024-12-10 05:55:52.258018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.480 [2024-12-10 05:55:52.258026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.480 [2024-12-10 05:55:52.258032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.480 [2024-12-10 05:55:52.258047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.480 qpair failed and we were unable to recover it. 
00:30:34.480 [2024-12-10 05:55:52.267980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.268034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.268049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.268056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.268063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.268079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.277979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.278034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.278051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.278059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.278064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.278079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.288014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.288068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.288083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.288090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.288097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.288112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.298031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.298086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.298100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.298107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.298113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.298128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.308051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.308110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.308124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.308131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.308138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.308153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.318071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.318125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.318139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.318146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.318158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.318173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.328099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.328156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.328170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.328178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.328185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.328200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.338148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.338204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.338223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.338231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.338238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.338253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.480 [2024-12-10 05:55:52.348167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.480 [2024-12-10 05:55:52.348229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.480 [2024-12-10 05:55:52.348242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.480 [2024-12-10 05:55:52.348249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.480 [2024-12-10 05:55:52.348256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.480 [2024-12-10 05:55:52.348271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.480 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.358226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.358329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.358342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.358349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.358355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.358370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.368144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.368203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.368215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.368227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.368233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.368248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.378192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.378256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.378270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.378278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.378284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.378298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.388211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.388272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.388286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.388293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.388299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.388314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.398349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.398414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.398428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.398435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.398442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.398456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.408312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.408373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.408390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.408398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.408404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.408419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.418400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.418458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.418471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.418478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.418484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.418499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.481 [2024-12-10 05:55:52.428324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.481 [2024-12-10 05:55:52.428406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.481 [2024-12-10 05:55:52.428420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.481 [2024-12-10 05:55:52.428428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.481 [2024-12-10 05:55:52.428435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.481 [2024-12-10 05:55:52.428450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.481 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.438458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.438521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.438535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.438542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.438548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.438563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.448381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.448434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.448448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.448454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.448464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.448479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.458503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.458559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.458572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.458579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.458586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.458602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.468444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.468508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.468522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.468530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.468536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.468552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.478534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.478588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.478601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.478608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.478615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.478630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.488549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.488602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.488616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.488623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.488629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.488644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.498585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.498651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.498665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.498672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.498679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.498694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.508614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.508669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.508682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.508689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.508696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.508710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.518641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.518701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.518715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.518723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.518729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.518744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.741 [2024-12-10 05:55:52.528643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.741 [2024-12-10 05:55:52.528704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.741 [2024-12-10 05:55:52.528719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.741 [2024-12-10 05:55:52.528727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.741 [2024-12-10 05:55:52.528733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.741 [2024-12-10 05:55:52.528749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.741 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.538647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.538717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.538733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.538740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.538747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.538762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.548652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.548711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.548724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.548731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.548738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.548753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.558709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.558768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.558782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.558788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.558795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.558810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.568814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.568869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.568883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.568890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.568896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.568911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.578803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.578861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.578875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.578885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.578892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.578906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.588805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.588891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.588904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.588912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.588918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.588932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.598778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.598834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.598848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.598855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.598862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.598876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.608912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.742 [2024-12-10 05:55:52.608966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.742 [2024-12-10 05:55:52.608979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.742 [2024-12-10 05:55:52.608986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.742 [2024-12-10 05:55:52.608992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90
00:30:34.742 [2024-12-10 05:55:52.609006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:34.742 qpair failed and we were unable to recover it.
00:30:34.742 [2024-12-10 05:55:52.618949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.742 [2024-12-10 05:55:52.619006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.742 [2024-12-10 05:55:52.619020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.742 [2024-12-10 05:55:52.619027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.742 [2024-12-10 05:55:52.619033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.742 [2024-12-10 05:55:52.619052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-10 05:55:52.628991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.742 [2024-12-10 05:55:52.629063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.742 [2024-12-10 05:55:52.629077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.742 [2024-12-10 05:55:52.629084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.742 [2024-12-10 05:55:52.629091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.742 [2024-12-10 05:55:52.629106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-10 05:55:52.638978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.742 [2024-12-10 05:55:52.639029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.742 [2024-12-10 05:55:52.639043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.742 [2024-12-10 05:55:52.639049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.742 [2024-12-10 05:55:52.639056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.742 [2024-12-10 05:55:52.639071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-10 05:55:52.648933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.742 [2024-12-10 05:55:52.648990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.742 [2024-12-10 05:55:52.649003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.742 [2024-12-10 05:55:52.649010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.742 [2024-12-10 05:55:52.649016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.742 [2024-12-10 05:55:52.649032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-10 05:55:52.659028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.742 [2024-12-10 05:55:52.659081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.742 [2024-12-10 05:55:52.659094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.742 [2024-12-10 05:55:52.659101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.742 [2024-12-10 05:55:52.659108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.742 [2024-12-10 05:55:52.659123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-10 05:55:52.668990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.742 [2024-12-10 05:55:52.669069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.743 [2024-12-10 05:55:52.669082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.743 [2024-12-10 05:55:52.669090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.743 [2024-12-10 05:55:52.669096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.743 [2024-12-10 05:55:52.669111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.743 qpair failed and we were unable to recover it. 
00:30:34.743 [2024-12-10 05:55:52.679076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.743 [2024-12-10 05:55:52.679130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.743 [2024-12-10 05:55:52.679143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.743 [2024-12-10 05:55:52.679150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.743 [2024-12-10 05:55:52.679158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.743 [2024-12-10 05:55:52.679173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.743 qpair failed and we were unable to recover it. 
00:30:34.743 [2024-12-10 05:55:52.689138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.743 [2024-12-10 05:55:52.689223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.743 [2024-12-10 05:55:52.689238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.743 [2024-12-10 05:55:52.689245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.743 [2024-12-10 05:55:52.689251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:34.743 [2024-12-10 05:55:52.689267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.743 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-10 05:55:52.699170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.002 [2024-12-10 05:55:52.699247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.002 [2024-12-10 05:55:52.699261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.002 [2024-12-10 05:55:52.699269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.002 [2024-12-10 05:55:52.699276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.002 [2024-12-10 05:55:52.699291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.002 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-10 05:55:52.709168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.002 [2024-12-10 05:55:52.709232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.002 [2024-12-10 05:55:52.709245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.709255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.709262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.709277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.719207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.719266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.719280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.719288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.719294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.719309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.729222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.729278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.729292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.729298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.729305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.729320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.739260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.739317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.739330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.739337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.739343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.739358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.749274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.749348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.749362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.749369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.749375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.749395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.759303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.759356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.759369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.759376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.759383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.759398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.769326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.769382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.769395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.769402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.769409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.769424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.779366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.779423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.779437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.779446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.779454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.779471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.789384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.789485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.789499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.789506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.789512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.789527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.799482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.799567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.799580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.799588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.799594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.799609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.809481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.809535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.809548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.809555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.809561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.809577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.819489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.819545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.819559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.819566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.819572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.819587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-10 05:55:52.829516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.003 [2024-12-10 05:55:52.829575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.003 [2024-12-10 05:55:52.829588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.003 [2024-12-10 05:55:52.829595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.003 [2024-12-10 05:55:52.829601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.003 [2024-12-10 05:55:52.829616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.839534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.839589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.839605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.839612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.839618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.839633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.849561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.849614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.849627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.849634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.849640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.849655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.859690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.859770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.859784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.859791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.859798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.859813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.869628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.869686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.869699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.869706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.869712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.869727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.879666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.879721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.879735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.879742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.879752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.879767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.889683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.889738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.889752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.889759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.889766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.889781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.899675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.899766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.899780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.899787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.899794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.899809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.909760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.909815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.909828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.909835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.909842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.909857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.919799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.919862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.919876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.919883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.919890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.919904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.929785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.929841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.929855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.929862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.929869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.929885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.939824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.939921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.939934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.939941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.939947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.939962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-10 05:55:52.949773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.004 [2024-12-10 05:55:52.949830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.004 [2024-12-10 05:55:52.949843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.004 [2024-12-10 05:55:52.949850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.004 [2024-12-10 05:55:52.949856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.004 [2024-12-10 05:55:52.949872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.263 [2024-12-10 05:55:52.959934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.263 [2024-12-10 05:55:52.959997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.263 [2024-12-10 05:55:52.960011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.263 [2024-12-10 05:55:52.960020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.263 [2024-12-10 05:55:52.960027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.263 [2024-12-10 05:55:52.960043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.263 qpair failed and we were unable to recover it. 
00:30:35.263 [2024-12-10 05:55:52.969903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.263 [2024-12-10 05:55:52.969959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.263 [2024-12-10 05:55:52.969976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.263 [2024-12-10 05:55:52.969983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.263 [2024-12-10 05:55:52.969989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.263 [2024-12-10 05:55:52.970005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.263 qpair failed and we were unable to recover it. 
00:30:35.263 [2024-12-10 05:55:52.979951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.263 [2024-12-10 05:55:52.980011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.263 [2024-12-10 05:55:52.980024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.263 [2024-12-10 05:55:52.980031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.263 [2024-12-10 05:55:52.980037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.263 [2024-12-10 05:55:52.980052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.263 qpair failed and we were unable to recover it. 
00:30:35.263 [2024-12-10 05:55:52.989959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.263 [2024-12-10 05:55:52.990014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.263 [2024-12-10 05:55:52.990028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.263 [2024-12-10 05:55:52.990035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.263 [2024-12-10 05:55:52.990042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.263 [2024-12-10 05:55:52.990056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.263 qpair failed and we were unable to recover it. 
00:30:35.263 [2024-12-10 05:55:52.999984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.263 [2024-12-10 05:55:53.000040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.263 [2024-12-10 05:55:53.000054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.263 [2024-12-10 05:55:53.000062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.263 [2024-12-10 05:55:53.000069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.263 [2024-12-10 05:55:53.000084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.263 qpair failed and we were unable to recover it. 
00:30:35.263 [2024-12-10 05:55:53.010046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.263 [2024-12-10 05:55:53.010100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.263 [2024-12-10 05:55:53.010113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.263 [2024-12-10 05:55:53.010121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.263 [2024-12-10 05:55:53.010132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.264 [2024-12-10 05:55:53.010147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.264 qpair failed and we were unable to recover it. 
00:30:35.264 [2024-12-10 05:55:53.020049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.264 [2024-12-10 05:55:53.020104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.264 [2024-12-10 05:55:53.020118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.264 [2024-12-10 05:55:53.020126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.264 [2024-12-10 05:55:53.020133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.264 [2024-12-10 05:55:53.020148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.264 qpair failed and we were unable to recover it. 
00:30:35.264 [2024-12-10 05:55:53.030111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.264 [2024-12-10 05:55:53.030171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.264 [2024-12-10 05:55:53.030185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.264 [2024-12-10 05:55:53.030192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.264 [2024-12-10 05:55:53.030198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.264 [2024-12-10 05:55:53.030213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.264 qpair failed and we were unable to recover it. 
00:30:35.264 [2024-12-10 05:55:53.040140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.264 [2024-12-10 05:55:53.040213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.264 [2024-12-10 05:55:53.040231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.264 [2024-12-10 05:55:53.040238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.264 [2024-12-10 05:55:53.040245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.264 [2024-12-10 05:55:53.040260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.264 qpair failed and we were unable to recover it. 
00:30:35.264 [2024-12-10 05:55:53.050124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.264 [2024-12-10 05:55:53.050183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.264 [2024-12-10 05:55:53.050196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.264 [2024-12-10 05:55:53.050204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.264 [2024-12-10 05:55:53.050210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1450000b90 00:30:35.264 [2024-12-10 05:55:53.050229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.264 qpair failed and we were unable to recover it. 00:30:35.264 [2024-12-10 05:55:53.050392] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:35.264 A controller has encountered a failure and is being reset. 00:30:35.264 Controller properly reset. 00:30:35.264 Initializing NVMe Controllers 00:30:35.264 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:35.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:35.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:35.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:35.264 Initialization complete. Launching workers. 
00:30:35.264 Starting thread on core 1 00:30:35.264 Starting thread on core 2 00:30:35.264 Starting thread on core 3 00:30:35.264 Starting thread on core 0 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:35.264 00:30:35.264 real 0m11.408s 00:30:35.264 user 0m21.885s 00:30:35.264 sys 0m4.744s 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.264 ************************************ 00:30:35.264 END TEST nvmf_target_disconnect_tc2 00:30:35.264 ************************************ 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.264 rmmod nvme_tcp 00:30:35.264 rmmod nvme_fabrics 00:30:35.264 rmmod nvme_keyring 00:30:35.264 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:35.523 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:35.523 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:35.523 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 307887 ']' 00:30:35.523 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 307887 00:30:35.523 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 307887 ']' 00:30:35.523 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 307887 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 307887 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 307887' 00:30:35.524 killing process with pid 307887 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 307887 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 307887 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.524 05:55:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.060 05:55:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.060 00:30:38.060 real 0m20.997s 00:30:38.060 user 0m49.725s 00:30:38.060 sys 0m10.271s 00:30:38.060 05:55:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.060 05:55:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:38.060 ************************************ 00:30:38.060 END TEST nvmf_target_disconnect 00:30:38.060 ************************************ 00:30:38.060 05:55:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:38.060 00:30:38.060 real 6m10.571s 00:30:38.060 user 10m49.366s 00:30:38.060 sys 2m8.631s 00:30:38.060 05:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.060 05:55:55 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.060 ************************************ 00:30:38.061 END TEST nvmf_host 00:30:38.061 ************************************ 00:30:38.061 05:55:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:38.061 05:55:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:38.061 05:55:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:38.061 05:55:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.061 05:55:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.061 05:55:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.061 ************************************ 00:30:38.061 START TEST nvmf_target_core_interrupt_mode 00:30:38.061 ************************************ 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:38.061 * Looking for test storage... 
00:30:38.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:38.061 05:55:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.061 --rc 
genhtml_branch_coverage=1 00:30:38.061 --rc genhtml_function_coverage=1 00:30:38.061 --rc genhtml_legend=1 00:30:38.061 --rc geninfo_all_blocks=1 00:30:38.061 --rc geninfo_unexecuted_blocks=1 00:30:38.061 00:30:38.061 ' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.061 --rc genhtml_branch_coverage=1 00:30:38.061 --rc genhtml_function_coverage=1 00:30:38.061 --rc genhtml_legend=1 00:30:38.061 --rc geninfo_all_blocks=1 00:30:38.061 --rc geninfo_unexecuted_blocks=1 00:30:38.061 00:30:38.061 ' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.061 --rc genhtml_branch_coverage=1 00:30:38.061 --rc genhtml_function_coverage=1 00:30:38.061 --rc genhtml_legend=1 00:30:38.061 --rc geninfo_all_blocks=1 00:30:38.061 --rc geninfo_unexecuted_blocks=1 00:30:38.061 00:30:38.061 ' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.061 --rc genhtml_branch_coverage=1 00:30:38.061 --rc genhtml_function_coverage=1 00:30:38.061 --rc genhtml_legend=1 00:30:38.061 --rc geninfo_all_blocks=1 00:30:38.061 --rc geninfo_unexecuted_blocks=1 00:30:38.061 00:30:38.061 ' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.061 
05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.061 05:55:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.061 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.062 
05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.062 ************************************ 00:30:38.062 START TEST nvmf_abort 00:30:38.062 ************************************ 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:38.062 * Looking for test storage... 
00:30:38.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.062 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:38.322 05:55:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.322 --rc genhtml_branch_coverage=1 00:30:38.322 --rc genhtml_function_coverage=1 00:30:38.322 --rc genhtml_legend=1 00:30:38.322 --rc geninfo_all_blocks=1 00:30:38.322 --rc geninfo_unexecuted_blocks=1 00:30:38.322 00:30:38.322 ' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.322 --rc genhtml_branch_coverage=1 00:30:38.322 --rc genhtml_function_coverage=1 00:30:38.322 --rc genhtml_legend=1 00:30:38.322 --rc geninfo_all_blocks=1 00:30:38.322 --rc geninfo_unexecuted_blocks=1 00:30:38.322 00:30:38.322 ' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.322 --rc genhtml_branch_coverage=1 00:30:38.322 --rc genhtml_function_coverage=1 00:30:38.322 --rc genhtml_legend=1 00:30:38.322 --rc geninfo_all_blocks=1 00:30:38.322 --rc geninfo_unexecuted_blocks=1 00:30:38.322 00:30:38.322 ' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.322 --rc genhtml_branch_coverage=1 00:30:38.322 --rc genhtml_function_coverage=1 00:30:38.322 --rc genhtml_legend=1 00:30:38.322 --rc geninfo_all_blocks=1 00:30:38.322 --rc geninfo_unexecuted_blocks=1 00:30:38.322 00:30:38.322 ' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.322 05:55:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.322 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.323 05:55:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.323 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.896 05:56:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:44.896 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:44.896 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.896 
05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:44.896 Found net devices under 0000:af:00.0: cvl_0_0 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:44.896 Found net devices under 0000:af:00.1: cvl_0_1 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.896 05:56:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.896 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:30:44.897 00:30:44.897 --- 10.0.0.2 ping statistics --- 00:30:44.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.897 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:30:44.897 00:30:44.897 --- 10.0.0.1 ping statistics --- 00:30:44.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.897 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=312958 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 312958 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 312958 ']' 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.897 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.897 [2024-12-10 05:56:02.785415] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.897 [2024-12-10 05:56:02.786303] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
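`waitforlisten 312958` above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A stripped-down sketch of that poll-until-ready loop (function name, sentinel-path test, and retry interval are illustrative, not the real `autotest_common.sh` implementation, which probes the RPC socket itself):

```shell
# Poll until a sentinel path (e.g. an app's UNIX-domain RPC socket)
# exists, giving up after max_retries attempts. Mirrors the shape of
# the waitforlisten loop driving this log.
waitforpath() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.01
    done
    return 0
}
```

In the real helper the success condition is an RPC round-trip, not a bare existence check, which is why the log prints "listen on UNIX domain socket" rather than just waiting for the file.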
00:30:44.897 [2024-12-10 05:56:02.786336] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.156 [2024-12-10 05:56:02.868937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:45.156 [2024-12-10 05:56:02.908757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.156 [2024-12-10 05:56:02.908793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.156 [2024-12-10 05:56:02.908800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.156 [2024-12-10 05:56:02.908806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.156 [2024-12-10 05:56:02.908811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.156 [2024-12-10 05:56:02.910087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.156 [2024-12-10 05:56:02.910196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.156 [2024-12-10 05:56:02.910197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.156 [2024-12-10 05:56:02.978248] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.156 [2024-12-10 05:56:02.979132] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:45.156 [2024-12-10 05:56:02.979345] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.156 [2024-12-10 05:56:02.979501] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.156 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.157 [2024-12-10 05:56:03.046975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:45.157 Malloc0 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.157 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.415 Delay0 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.415 [2024-12-10 05:56:03.138936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.415 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:45.415 [2024-12-10 05:56:03.221830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:47.946 Initializing NVMe Controllers 00:30:47.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:47.946 controller IO queue size 128 less than required 00:30:47.946 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:47.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:47.946 Initialization complete. Launching workers. 
00:30:47.946 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37915 00:30:47.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37976, failed to submit 66 00:30:47.946 success 37915, unsuccessful 61, failed 0 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.946 rmmod nvme_tcp 00:30:47.946 rmmod nvme_fabrics 00:30:47.946 rmmod nvme_keyring 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.946 05:56:05 
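The abort example's summary can be sanity-checked: every submitted abort should be accounted for as success, unsuccessful, or failed. Using the numbers reported verbatim above (37976 submitted; 37915 success, 61 unsuccessful, 0 failed), a quick arithmetic check, written as a sketch rather than anything the test itself runs:

```shell
# Consistency check on the abort run's tally: submitted aborts must
# equal the sum of their three reported outcomes.
submitted=37976 success=37915 unsuccessful=61 failed=0
if [ $((success + unsuccessful + failed)) -eq "$submitted" ]; then
    echo consistent
else
    echo inconsistent
fi
```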
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 312958 ']' 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 312958 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 312958 ']' 00:30:47.946 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 312958 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 312958 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 312958' 00:30:47.947 killing process with pid 312958 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 312958 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 312958 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.947 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.851 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.851 00:30:49.851 real 0m11.924s 00:30:49.851 user 0m10.886s 00:30:49.851 sys 0m6.193s 00:30:49.851 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.851 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.851 ************************************ 00:30:49.851 END TEST nvmf_abort 00:30:49.851 ************************************ 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:50.111 ************************************ 00:30:50.111 START TEST nvmf_ns_hotplug_stress 00:30:50.111 ************************************ 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:50.111 * Looking for test storage... 00:30:50.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:50.111 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.111 --rc genhtml_branch_coverage=1 00:30:50.111 --rc genhtml_function_coverage=1 00:30:50.111 --rc genhtml_legend=1 00:30:50.111 --rc geninfo_all_blocks=1 00:30:50.111 --rc geninfo_unexecuted_blocks=1 00:30:50.111 00:30:50.111 ' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.111 --rc genhtml_branch_coverage=1 00:30:50.111 --rc genhtml_function_coverage=1 00:30:50.111 --rc genhtml_legend=1 00:30:50.111 --rc geninfo_all_blocks=1 00:30:50.111 --rc geninfo_unexecuted_blocks=1 00:30:50.111 00:30:50.111 ' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.111 --rc genhtml_branch_coverage=1 00:30:50.111 --rc genhtml_function_coverage=1 00:30:50.111 --rc genhtml_legend=1 00:30:50.111 --rc geninfo_all_blocks=1 00:30:50.111 --rc geninfo_unexecuted_blocks=1 00:30:50.111 00:30:50.111 ' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.111 --rc genhtml_branch_coverage=1 00:30:50.111 --rc genhtml_function_coverage=1 00:30:50.111 --rc genhtml_legend=1 00:30:50.111 --rc geninfo_all_blocks=1 00:30:50.111 --rc geninfo_unexecuted_blocks=1 00:30:50.111 00:30:50.111 ' 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
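The `lt 1.15 2` / `cmp_versions` trace above is `scripts/common.sh` deciding whether the installed `lcov` is older than 2.0 before choosing coverage flags. The real helper splits each version on `.-:` and compares field by field; for plain dotted versions the same ordering falls out of `sort -V`, so a one-line approximation (an equivalent sketch, not the upstream implementation):

```shell
# Approximate the lt helper exercised in the log: true when the first
# version sorts strictly before the second under GNU version ordering.
lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

Here `lt 1.15 2` succeeds, matching the branch the log takes into the pre-2.0 `LCOV_OPTS` block.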
nvmf/common.sh@7 -- # uname -s 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.111 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.371 05:56:08 
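The exported `PATH` echoed above repeats the golang/protoc/golangci prefixes many times because `paths/export.sh` prepends them on every `source`. One way to collapse such accumulation while keeping first-seen order (a standalone sketch; SPDK's scripts do not actually dedupe here):

```shell
# Remove duplicate PATH entries, preserving the order of first
# occurrence; awk splits on ':' and prints each entry only once.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
```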
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.371 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.372 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.372 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.372 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.372 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:50.372 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:50.372 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.372 05:56:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:56.941 05:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.941 
05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:56.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.941 05:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:56.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.941 05:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:56.941 Found net devices under 0000:af:00.0: cvl_0_0 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:56.941 Found net devices under 0000:af:00.1: cvl_0_1 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.941 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:30:56.942 00:30:56.942 --- 10.0.0.2 ping statistics --- 00:30:56.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.942 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:56.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:30:56.942 00:30:56.942 --- 10.0.0.1 ping statistics --- 00:30:56.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.942 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.942 05:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=317254 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 317254 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 317254 ']' 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:56.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.942 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.942 [2024-12-10 05:56:14.808843] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.942 [2024-12-10 05:56:14.809761] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:30:56.942 [2024-12-10 05:56:14.809795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.201 [2024-12-10 05:56:14.894897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.201 [2024-12-10 05:56:14.934342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.201 [2024-12-10 05:56:14.934377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.201 [2024-12-10 05:56:14.934384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.201 [2024-12-10 05:56:14.934390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.201 [2024-12-10 05:56:14.934395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:57.201 [2024-12-10 05:56:14.935775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.201 [2024-12-10 05:56:14.935886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.201 [2024-12-10 05:56:14.935886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.201 [2024-12-10 05:56:15.002334] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:57.201 [2024-12-10 05:56:15.003224] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:57.201 [2024-12-10 05:56:15.003679] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:57.201 [2024-12-10 05:56:15.003760] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:57.773 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:58.031 [2024-12-10 05:56:15.840643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.031 05:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:58.290 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.290 [2024-12-10 05:56:16.237177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.547 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:58.548 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:58.806 Malloc0 00:30:58.806 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:59.064 Delay0 00:30:59.064 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.321 05:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:59.321 NULL1 00:30:59.321 05:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:59.578 05:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=317736 00:30:59.578 05:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:59.578 05:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:30:59.578 05:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.946 Read completed with error (sct=0, sc=11) 00:31:00.946 05:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:31:00.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.946 05:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:00.946 05:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:01.203 true 00:31:01.203 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:01.203 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.133 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.133 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:02.133 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:02.391 true 00:31:02.391 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:02.391 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:02.648 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.905 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:02.905 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:02.905 true 00:31:02.905 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:02.905 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.275 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.275 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:04.275 05:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:04.533 true 00:31:04.533 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:04.533 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.464 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.464 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:05.464 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:05.721 true 00:31:05.721 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:05.721 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.978 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.978 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:05.978 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:06.235 true 00:31:06.235 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:06.235 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:07.623 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:07.880 
true 00:31:07.880 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:07.880 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.810 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.810 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:08.810 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:09.067 true 00:31:09.067 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:09.067 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.324 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.324 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:09.324 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:09.582 true 00:31:09.582 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:09.582 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.951 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.951 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:10.951 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:11.208 true 00:31:11.208 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:11.208 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:12.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:12.140 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.140 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:12.140 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:12.397 true 00:31:12.397 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:12.397 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.654 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.912 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:12.912 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:12.912 true 00:31:12.912 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:12.912 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.297 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:14.297 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:14.554 true 00:31:14.554 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:14.554 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.485 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.485 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:15.485 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:15.742 true 00:31:15.742 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:15.742 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.998 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.998 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:15.999 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:16.255 true 00:31:16.255 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:16.255 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 05:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.623 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:17.623 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:17.879 true 00:31:17.879 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:17.879 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.808 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.808 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:18.808 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:19.065 true 00:31:19.065 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:19.065 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.321 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.578 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:19.578 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:19.578 true 00:31:19.578 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:19.578 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.946 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.946 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:20.946 05:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:20.946 true 00:31:21.202 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:21.202 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.202 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.459 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:21.459 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:21.715 true 00:31:21.715 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:21.715 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.083 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.083 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:31:23.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.083 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:23.083 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:23.339 true 00:31:23.339 05:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:23.339 05:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.269 05:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.270 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:24.270 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:24.527 true 00:31:24.527 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:24.527 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.784 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.784 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:24.784 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:25.041 true 00:31:25.041 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:25.041 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.411 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:26.411 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:26.668 true 00:31:26.668 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:26.668 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.598 05:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.598 05:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:27.598 05:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:27.854 true 00:31:27.854 05:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:27.854 05:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.175 05:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.175 05:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:28.175 05:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:28.526 true 00:31:28.526 05:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:28.526 05:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.482 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.739 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:29.739 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:29.996 true 00:31:29.996 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:29.996 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.926 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.926 Initializing NVMe Controllers 00:31:30.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:30.926 Controller IO queue size 128, less than required. 00:31:30.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:30.926 Controller IO queue size 128, less than required. 00:31:30.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:30.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:30.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:30.926 Initialization complete. Launching workers. 
00:31:30.926 ======================================================== 00:31:30.926 Latency(us) 00:31:30.926 Device Information : IOPS MiB/s Average min max 00:31:30.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2328.03 1.14 40211.93 2551.20 1013080.90 00:31:30.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18720.53 9.14 6837.30 1589.36 442086.27 00:31:30.926 ======================================================== 00:31:30.926 Total : 21048.57 10.28 10528.63 1589.36 1013080.90 00:31:30.926 00:31:30.926 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:30.926 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:31.183 true 00:31:31.183 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 317736 00:31:31.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (317736) - No such process 00:31:31.183 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 317736 00:31:31.183 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.440 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.440 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:31.440 
05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:31.440 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:31.440 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:31.440 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:31.698 null0 00:31:31.698 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:31.698 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:31.698 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:31.956 null1 00:31:31.956 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:31.956 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:31.956 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:31.956 null2 00:31:32.215 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:32.215 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:32.215 05:56:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:32.215 null3 00:31:32.215 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:32.215 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:32.215 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:32.473 null4 00:31:32.473 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:32.473 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:32.473 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:32.731 null5 00:31:32.731 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:32.731 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:32.731 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:32.731 null6 00:31:32.731 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:32.731 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:32.731 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:32.989 null7 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.989 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 323031 323034 323036 323039 323042 323045 323048 323050 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.990 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.249 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.507 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.508 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.765 05:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.765 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.765 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.766 05:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.766 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.766 05:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:34.023 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:34.023 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:34.023 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.023 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.023 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.024 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:34.024 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:34.024 05:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:34.282 05:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.282 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.540 05:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:34.540 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.798 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:34.799 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.057 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:35.315 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:35.572 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.830 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:36.088 05:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:36.088 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.345 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.603 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:36.861 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:36.862 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:37.120 05:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.120 rmmod nvme_tcp 00:31:37.120 rmmod nvme_fabrics 00:31:37.120 rmmod nvme_keyring 00:31:37.120 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 317254 ']' 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 317254 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 317254 ']' 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 317254 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317254 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317254' 00:31:37.379 killing process with pid 317254 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 317254 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 317254 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-restore 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.379 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.917 00:31:39.917 real 0m49.504s 00:31:39.917 user 3m0.502s 00:31:39.917 sys 0m20.543s 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:39.917 ************************************ 00:31:39.917 END TEST nvmf_ns_hotplug_stress 00:31:39.917 ************************************ 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # 
set +x 00:31:39.917 ************************************ 00:31:39.917 START TEST nvmf_delete_subsystem 00:31:39.917 ************************************ 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:39.917 * Looking for test storage... 00:31:39.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra 
ver2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.917 --rc genhtml_branch_coverage=1 00:31:39.917 --rc genhtml_function_coverage=1 00:31:39.917 --rc genhtml_legend=1 00:31:39.917 --rc geninfo_all_blocks=1 00:31:39.917 --rc geninfo_unexecuted_blocks=1 00:31:39.917 00:31:39.917 ' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.917 --rc genhtml_branch_coverage=1 00:31:39.917 --rc genhtml_function_coverage=1 00:31:39.917 --rc genhtml_legend=1 00:31:39.917 --rc geninfo_all_blocks=1 00:31:39.917 --rc geninfo_unexecuted_blocks=1 00:31:39.917 00:31:39.917 ' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.917 --rc genhtml_branch_coverage=1 00:31:39.917 --rc genhtml_function_coverage=1 00:31:39.917 --rc genhtml_legend=1 00:31:39.917 --rc geninfo_all_blocks=1 00:31:39.917 --rc geninfo_unexecuted_blocks=1 00:31:39.917 00:31:39.917 ' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.917 --rc genhtml_branch_coverage=1 00:31:39.917 --rc genhtml_function_coverage=1 00:31:39.917 --rc genhtml_legend=1 00:31:39.917 --rc geninfo_all_blocks=1 00:31:39.917 --rc geninfo_unexecuted_blocks=1 00:31:39.917 00:31:39.917 ' 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.917 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.918 05:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.918 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.491 05:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.491 05:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:46.491 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:46.491 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.491 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.492 05:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:46.492 Found net devices under 0000:af:00.0: cvl_0_0 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:46.492 Found net devices under 0000:af:00.1: cvl_0_1 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.492 05:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.492 05:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:46.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:31:46.492 00:31:46.492 --- 10.0.0.2 ping statistics --- 00:31:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.492 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:46.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:31:46.492 00:31:46.492 --- 10.0.0.1 ping statistics --- 00:31:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.492 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.492 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.752 
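Stripped of xtrace markers, the nvmf_tcp_init trace above reduces to the following plumbing (reconstructed from this log; `cvl_0_0`/`cvl_0_1` are the ice-driver netdevs on this rig, and the commands need root on the test host — shown here only as a reading aid, not runnable outside the CI machine):

```
# Flush stale addresses, then move the target-side port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends: initiator stays in the root namespace, target in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP/4420 for NVMe-oF and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The sub-millisecond ping RTTs above confirm the namespace split worked before `modprobe nvme-tcp` and target startup.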
05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=327979 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 327979 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 327979 ']' 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.752 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:46.752 [2024-12-10 05:57:04.542791] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.752 [2024-12-10 05:57:04.543697] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:31:46.752 [2024-12-10 05:57:04.543731] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.752 [2024-12-10 05:57:04.626292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:46.752 [2024-12-10 05:57:04.664010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.752 [2024-12-10 05:57:04.664041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.752 [2024-12-10 05:57:04.664048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.752 [2024-12-10 05:57:04.664054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.752 [2024-12-10 05:57:04.664060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.752 [2024-12-10 05:57:04.665253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.752 [2024-12-10 05:57:04.665254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.011 [2024-12-10 05:57:04.733313] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:47.011 [2024-12-10 05:57:04.733775] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:47.011 [2024-12-10 05:57:04.734030] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 [2024-12-10 05:57:04.814046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 [2024-12-10 05:57:04.842401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 NULL1 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 Delay0 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=328000 00:31:47.011 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:47.012 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:47.012 [2024-12-10 05:57:04.954378] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
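Condensed from the trace above, delete_subsystem.sh's setup sequence is the following (assuming SPDK's `rpc_cmd` helper wraps `scripts/rpc.py` as usual; the `nvmf_tgt` and `spdk_nvme_perf` paths are abbreviated from the full workspace paths in the log):

```
# Target: interrupt-mode nvmf_tgt on cores 0-1, inside the target namespace
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator: 5 s of queue-depth-128 randrw I/O against the heavily delayed namespace
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
               -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
```

The 1,000,000 µs delays on Delay0 guarantee that when `nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1` fires two seconds later, perf still has in-flight commands — which is exactly what produces the completion-error burst that follows.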
00:31:49.541 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.541 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.541 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 starting I/O failed: -6 00:31:49.541 Write completed with error (sct=0, sc=8) 00:31:49.541 Write completed with error (sct=0, sc=8) 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 Write completed with error (sct=0, sc=8) 00:31:49.541 starting I/O failed: -6 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 starting I/O failed: -6 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 Write completed with error (sct=0, sc=8) 00:31:49.541 Read completed with error (sct=0, sc=8) 00:31:49.541 starting I/O failed: -6 00:31:49.541 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, 
sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 [2024-12-10 05:57:07.041480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4fbc00d060 is same with the state(6) to be set 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read 
completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read 
completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write 
completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 starting I/O failed: -6 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 [2024-12-10 05:57:07.042095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16780 is same with the state(6) to be set 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with 
error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Read completed with error (sct=0, sc=8) 00:31:49.542 Write completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Write completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 Read completed with error (sct=0, sc=8) 00:31:49.543 [2024-12-10 05:57:07.042308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4fbc000c80 is same with the state(6) to be set 00:31:50.108 [2024-12-10 05:57:08.009378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe179b0 is same with the state(6) to be set 00:31:50.108 Write completed with error (sct=0, sc=8) 00:31:50.108 Read completed with error (sct=0, sc=8) 00:31:50.108 Write completed with error (sct=0, sc=8) 00:31:50.108 Read completed with error (sct=0, sc=8) 00:31:50.108 Read completed with 
error (sct=0, sc=8)
00:31:50.108 Read completed with error (sct=0, sc=8)
00:31:50.108 Write completed with error (sct=0, sc=8)
00:31:50.108-00:31:50.109 (Read/Write "completed with error (sct=0, sc=8)" repeated for every outstanding I/O on the aborted qpairs)
00:31:50.109 [2024-12-10 05:57:08.044169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16960 is same with the state(6) to be set
00:31:50.109 [2024-12-10 05:57:08.044326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16b40 is same with the state(6) to be set
00:31:50.109 [2024-12-10 05:57:08.044430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4fbc00d6c0 is same with the state(6) to be set
00:31:50.109 [2024-12-10 05:57:08.044925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe162c0 is same with the state(6) to be set
00:31:50.109 Initializing NVMe Controllers
00:31:50.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:50.109 Controller IO queue size 128, less than required.
00:31:50.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:50.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:50.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:50.109 Initialization complete. Launching workers.
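As a quick cross-check of the spdk_nvme_perf summary tables in this log: the MiB/s column is IOPS × IO size / 2^20, with a 512-byte IO size (the `-o 512` visible in the second perf invocation below; the flags of the first run scrolled off above, so treating it as 512 B here is an assumption):

```shell
# Cross-check perf's MiB/s column for the "from core 2" row: 188.61 IOPS at 512 B.
awk 'BEGIN { printf "%.2f\n", 188.61 * 512 / 1048576 }'
# prints 0.09, matching the table
```

The same arithmetic reproduces the second run's rows (128 IOPS × 512 B ≈ 0.06 MiB/s).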
00:31:50.109 ========================================================
00:31:50.109 Latency(us)
00:31:50.109 Device Information : IOPS MiB/s Average min max
00:31:50.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.61 0.09 949205.24 518.67 1011373.04
00:31:50.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.89 0.07 893414.69 320.55 1011360.85
00:31:50.109 ========================================================
00:31:50.109 Total : 339.50 0.17 924409.44 320.55 1011373.04
00:31:50.109
00:31:50.109 [2024-12-10 05:57:08.045580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe179b0 (9): Bad file descriptor
00:31:50.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:50.109 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.109 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:50.109 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 328000
00:31:50.109 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 328000
00:31:50.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (328000) - No such process
00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 328000
00:31:50.677 05:57:08
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 328000 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 328000 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.677 [2024-12-10 05:57:08.574323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=329063 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:50.677 05:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:50.935 [2024-12-10 05:57:08.656539] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:51.192 05:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:51.192 05:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:51.193 05:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:51.757 05:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:51.757 05:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:51.757 05:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:52.322 05:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:52.322 05:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:52.322 05:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:52.887 05:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:31:52.887 05:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:52.887 05:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:53.452 05:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:53.452 05:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:53.452 05:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:53.710 05:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:53.710 05:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063 00:31:53.710 05:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:53.970 Initializing NVMe Controllers 00:31:53.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:53.970 Controller IO queue size 128, less than required. 00:31:53.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:53.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:53.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:53.971 Initialization complete. Launching workers. 
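The delay loop traced above (delete_subsystem.sh lines 57-58: probe the perf PID with `kill -0`, sleep 0.5 s, bounded by a retry counter) can be sketched as a standalone helper. `wait_for_exit` and its retry budget are illustrative names, not identifiers from the script itself:

```shell
# Sketch of the delete_subsystem.sh wait pattern: poll a PID with `kill -0`
# every 0.5 s; bail out once the retry counter passes its budget.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            return 1            # still alive after ~10 s: give up
        fi
        sleep 0.5
    done
    return 0                    # process is gone; a follow-up `wait` reaps it
}

sleep 1 &                       # stand-in for the spdk_nvme_perf child
wait_for_exit $!
echo "exit status: $?"
```

Note that once the child is gone, `kill -0` fails with "No such process" exactly as the log shows, which is what breaks the loop.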
00:31:53.971 ========================================================
00:31:53.971 Latency(us)
00:31:53.971 Device Information : IOPS MiB/s Average min max
00:31:53.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002183.53 1000148.21 1041151.41
00:31:53.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004299.97 1000244.61 1041284.03
00:31:53.971 ========================================================
00:31:53.971 Total : 256.00 0.12 1003241.75 1000148.21 1041284.03
00:31:53.971
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 329063
00:31:54.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (329063) - No such process
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 329063
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.229 rmmod nvme_tcp 00:31:54.229 rmmod nvme_fabrics 00:31:54.229 rmmod nvme_keyring 00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:54.229 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 327979 ']' 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 327979 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 327979 ']' 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 327979 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327979 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 327979' 00:31:54.489 killing process with pid 327979 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 327979 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 327979 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.489 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.121 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.121 00:31:57.122 real 0m17.025s 00:31:57.122 user 0m26.384s 00:31:57.122 sys 0m6.557s 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:57.122 ************************************ 00:31:57.122 END TEST nvmf_delete_subsystem 00:31:57.122 ************************************ 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:57.122 ************************************ 00:31:57.122 START TEST nvmf_host_management 00:31:57.122 ************************************ 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:57.122 * Looking for test storage... 
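The nvmf_host_management run below opens by checking the installed lcov version against 2 via the `lt`/`cmp_versions` helpers in scripts/common.sh (the `IFS=.-`, `read -ra ver1`, field-by-field trace that follows). A minimal re-implementation of that "<" comparison, under the assumption that it splits on `.` and `-` and compares fields numerically; `version_lt` is our name, not the script's:

```shell
# Sketch of the cmp_versions "<" case from scripts/common.sh:
# split both versions on '.' and '-', then compare numerically field by field,
# treating missing fields as 0.
version_lt() {
    local IFS='.-'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"    # the comparison traced below
```

The log's trace (`ver1_l=2`, `ver2_l=1`, `decimal 1`, `decimal 2`, `(( ver1[v] < ver2[v] ))`) is this same walk for lcov 1.15 against 2.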
00:31:57.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.122 05:57:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:57.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.122 --rc genhtml_branch_coverage=1 00:31:57.122 --rc genhtml_function_coverage=1 00:31:57.122 --rc genhtml_legend=1 00:31:57.122 --rc geninfo_all_blocks=1 00:31:57.122 --rc geninfo_unexecuted_blocks=1 00:31:57.122 00:31:57.122 ' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:57.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.122 --rc genhtml_branch_coverage=1 00:31:57.122 --rc genhtml_function_coverage=1 00:31:57.122 --rc genhtml_legend=1 00:31:57.122 --rc geninfo_all_blocks=1 00:31:57.122 --rc geninfo_unexecuted_blocks=1 00:31:57.122 00:31:57.122 ' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:57.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.122 --rc genhtml_branch_coverage=1 00:31:57.122 --rc genhtml_function_coverage=1 00:31:57.122 --rc genhtml_legend=1 00:31:57.122 --rc geninfo_all_blocks=1 00:31:57.122 --rc geninfo_unexecuted_blocks=1 00:31:57.122 00:31:57.122 ' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:57.122 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.122 --rc genhtml_branch_coverage=1 00:31:57.122 --rc genhtml_function_coverage=1 00:31:57.122 --rc genhtml_legend=1 00:31:57.122 --rc geninfo_all_blocks=1 00:31:57.122 --rc geninfo_unexecuted_blocks=1 00:31:57.122 00:31:57.122 ' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.122 05:57:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.122 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.123 
05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.123 05:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:03.694 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.694 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.695 
05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.695 05:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:03.695 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.695 05:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:03.695 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.695 05:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:03.695 Found net devices under 0000:af:00.0: cvl_0_0 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:03.695 Found net devices under 0000:af:00.1: cvl_0_1 00:32:03.695 05:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:03.695 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:32:03.696 00:32:03.696 --- 10.0.0.2 ping statistics --- 00:32:03.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.696 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:32:03.696 00:32:03.696 --- 10.0.0.1 ping statistics --- 00:32:03.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.696 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=333522 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 333522 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 333522 ']' 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.696 05:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:03.696 [2024-12-10 05:57:21.616393] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:03.696 [2024-12-10 05:57:21.617287] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:32:03.696 [2024-12-10 05:57:21.617320] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.956 [2024-12-10 05:57:21.701672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:03.956 [2024-12-10 05:57:21.742809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.956 [2024-12-10 05:57:21.742848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.956 [2024-12-10 05:57:21.742855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.956 [2024-12-10 05:57:21.742861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.956 [2024-12-10 05:57:21.742866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:03.956 [2024-12-10 05:57:21.744451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:03.956 [2024-12-10 05:57:21.744582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:03.956 [2024-12-10 05:57:21.744689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.956 [2024-12-10 05:57:21.744690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:03.956 [2024-12-10 05:57:21.812697] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:03.956 [2024-12-10 05:57:21.813757] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:03.956 [2024-12-10 05:57:21.813831] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:03.956 [2024-12-10 05:57:21.814263] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:03.956 [2024-12-10 05:57:21.814296] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:04.524 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.524 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:04.524 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:04.524 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:04.524 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.782 [2024-12-10 05:57:22.489375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.782 05:57:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.782 Malloc0 00:32:04.782 [2024-12-10 05:57:22.581660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=333647 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 333647 /var/tmp/bdevperf.sock 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 333647 ']' 00:32:04.782 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:04.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:04.783 { 00:32:04.783 "params": { 00:32:04.783 "name": "Nvme$subsystem", 00:32:04.783 "trtype": "$TEST_TRANSPORT", 00:32:04.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.783 "adrfam": "ipv4", 00:32:04.783 "trsvcid": "$NVMF_PORT", 00:32:04.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.783 "hdgst": ${hdgst:-false}, 00:32:04.783 "ddgst": ${ddgst:-false} 00:32:04.783 }, 00:32:04.783 "method": "bdev_nvme_attach_controller" 00:32:04.783 } 00:32:04.783 EOF 00:32:04.783 )") 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:04.783 05:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:04.783 "params": { 00:32:04.783 "name": "Nvme0", 00:32:04.783 "trtype": "tcp", 00:32:04.783 "traddr": "10.0.0.2", 00:32:04.783 "adrfam": "ipv4", 00:32:04.783 "trsvcid": "4420", 00:32:04.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.783 "hdgst": false, 00:32:04.783 "ddgst": false 00:32:04.783 }, 00:32:04.783 "method": "bdev_nvme_attach_controller" 00:32:04.783 }' 00:32:04.783 [2024-12-10 05:57:22.679951] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:32:04.783 [2024-12-10 05:57:22.680004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333647 ] 00:32:05.040 [2024-12-10 05:57:22.762712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.040 [2024-12-10 05:57:22.802451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.297 Running I/O for 10 seconds... 
00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:05.297 05:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:32:05.297 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.556 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.556 [2024-12-10 05:57:23.479329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:05.556 [2024-12-10 05:57:23.479572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.556 [2024-12-10 05:57:23.479833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.556 [2024-12-10 05:57:23.479841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 
[2024-12-10 05:57:23.479910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.479986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.479994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 
[2024-12-10 05:57:23.480255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.557 [2024-12-10 05:57:23.480337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.557 [2024-12-10 05:57:23.480344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4cb50 is same with the state(6) to be set 00:32:05.557 [2024-12-10 05:57:23.481302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:05.557 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.557 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:05.557 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.557 task offset: 106368 on job bdev=Nvme0n1 fails 00:32:05.557 00:32:05.557 Latency(us) 00:32:05.557 [2024-12-10T04:57:23.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.557 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:05.557 Job: Nvme0n1 ended in about 0.40 seconds with error 00:32:05.557 Verification LBA range: start 0x0 length 0x400 00:32:05.557 Nvme0n1 : 0.40 1910.40 119.40 159.20 0.00 30102.06 4462.69 27088.21 00:32:05.557 [2024-12-10T04:57:23.516Z] =================================================================================================================== 00:32:05.557 [2024-12-10T04:57:23.516Z] Total : 1910.40 119.40 159.20 0.00 30102.06 4462.69 27088.21 00:32:05.557 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.557 [2024-12-10 05:57:23.483725] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:05.557 [2024-12-10 05:57:23.483747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1833b20 (9): Bad file descriptor 00:32:05.557 [2024-12-10 05:57:23.484774] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:05.557 [2024-12-10 05:57:23.484886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:05.557 [2024-12-10 05:57:23.484908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.558 [2024-12-10 05:57:23.484919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:05.558 [2024-12-10 05:57:23.484930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:05.558 [2024-12-10 05:57:23.484937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.558 [2024-12-10 05:57:23.484944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1833b20 00:32:05.558 [2024-12-10 05:57:23.484962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1833b20 (9): Bad file descriptor 00:32:05.558 [2024-12-10 05:57:23.484974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:05.558 [2024-12-10 05:57:23.484982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:05.558 [2024-12-10 05:57:23.484990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:05.558 [2024-12-10 05:57:23.484998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:05.558 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.558 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 333647 00:32:06.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (333647) - No such process 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:06.926 { 
00:32:06.926 "params": { 00:32:06.926 "name": "Nvme$subsystem", 00:32:06.926 "trtype": "$TEST_TRANSPORT", 00:32:06.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.926 "adrfam": "ipv4", 00:32:06.926 "trsvcid": "$NVMF_PORT", 00:32:06.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.926 "hdgst": ${hdgst:-false}, 00:32:06.926 "ddgst": ${ddgst:-false} 00:32:06.926 }, 00:32:06.926 "method": "bdev_nvme_attach_controller" 00:32:06.926 } 00:32:06.926 EOF 00:32:06.926 )") 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:06.926 05:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.926 "params": { 00:32:06.926 "name": "Nvme0", 00:32:06.926 "trtype": "tcp", 00:32:06.926 "traddr": "10.0.0.2", 00:32:06.926 "adrfam": "ipv4", 00:32:06.926 "trsvcid": "4420", 00:32:06.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.926 "hdgst": false, 00:32:06.926 "ddgst": false 00:32:06.926 }, 00:32:06.926 "method": "bdev_nvme_attach_controller" 00:32:06.926 }' 00:32:06.926 [2024-12-10 05:57:24.541642] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:32:06.926 [2024-12-10 05:57:24.541688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334033 ] 00:32:06.926 [2024-12-10 05:57:24.622423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.926 [2024-12-10 05:57:24.660104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.183 Running I/O for 1 seconds... 00:32:08.113 1984.00 IOPS, 124.00 MiB/s 00:32:08.113 Latency(us) 00:32:08.113 [2024-12-10T04:57:26.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.113 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:08.113 Verification LBA range: start 0x0 length 0x400 00:32:08.113 Nvme0n1 : 1.02 2016.95 126.06 0.00 0.00 31241.36 4899.60 26838.55 00:32:08.113 [2024-12-10T04:57:26.072Z] =================================================================================================================== 00:32:08.113 [2024-12-10T04:57:26.072Z] Total : 2016.95 126.06 0.00 0.00 31241.36 4899.60 26838.55 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.371 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.371 rmmod nvme_tcp 00:32:08.371 rmmod nvme_fabrics 00:32:08.372 rmmod nvme_keyring 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 333522 ']' 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 333522 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 333522 ']' 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 333522 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:08.372 05:57:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333522 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333522' 00:32:08.372 killing process with pid 333522 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 333522 00:32:08.372 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 333522 00:32:08.631 [2024-12-10 05:57:26.399655] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.631 05:57:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.631 05:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:11.168 00:32:11.168 real 0m13.957s 00:32:11.168 user 0m18.840s 00:32:11.168 sys 0m6.902s 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.168 ************************************ 00:32:11.168 END TEST nvmf_host_management 00:32:11.168 ************************************ 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:11.168 
05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:11.168 ************************************ 00:32:11.168 START TEST nvmf_lvol 00:32:11.168 ************************************ 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:11.168 * Looking for test storage... 00:32:11.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.168 05:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:11.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.168 --rc genhtml_branch_coverage=1 00:32:11.168 --rc 
genhtml_function_coverage=1 00:32:11.168 --rc genhtml_legend=1 00:32:11.168 --rc geninfo_all_blocks=1 00:32:11.168 --rc geninfo_unexecuted_blocks=1 00:32:11.168 00:32:11.168 ' 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:11.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.168 --rc genhtml_branch_coverage=1 00:32:11.168 --rc genhtml_function_coverage=1 00:32:11.168 --rc genhtml_legend=1 00:32:11.168 --rc geninfo_all_blocks=1 00:32:11.168 --rc geninfo_unexecuted_blocks=1 00:32:11.168 00:32:11.168 ' 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:11.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.168 --rc genhtml_branch_coverage=1 00:32:11.168 --rc genhtml_function_coverage=1 00:32:11.168 --rc genhtml_legend=1 00:32:11.168 --rc geninfo_all_blocks=1 00:32:11.168 --rc geninfo_unexecuted_blocks=1 00:32:11.168 00:32:11.168 ' 00:32:11.168 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:11.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.169 --rc genhtml_branch_coverage=1 00:32:11.169 --rc genhtml_function_coverage=1 00:32:11.169 --rc genhtml_legend=1 00:32:11.169 --rc geninfo_all_blocks=1 00:32:11.169 --rc geninfo_unexecuted_blocks=1 00:32:11.169 00:32:11.169 ' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.169 05:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.169 05:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:11.169 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:17.744 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:17.744 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.744 05:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:17.744 Found net devices under 0000:af:00.0: cvl_0_0 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:17.744 Found net devices under 0000:af:00.1: cvl_0_1 00:32:17.744 05:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.744 05:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.744 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:17.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:32:17.745 00:32:17.745 --- 10.0.0.2 ping statistics --- 00:32:17.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.745 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:17.745 00:32:17.745 --- 10.0.0.1 ping statistics --- 00:32:17.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.745 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:17.745 
05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=338097 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 338097 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 338097 ']' 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.745 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:17.745 [2024-12-10 05:57:35.660312] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:32:17.745 [2024-12-10 05:57:35.661286] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:32:17.745 [2024-12-10 05:57:35.661328] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.003 [2024-12-10 05:57:35.745244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:18.003 [2024-12-10 05:57:35.785563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.003 [2024-12-10 05:57:35.785599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.003 [2024-12-10 05:57:35.785606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.003 [2024-12-10 05:57:35.785613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.003 [2024-12-10 05:57:35.785618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.003 [2024-12-10 05:57:35.786873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.003 [2024-12-10 05:57:35.786978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.003 [2024-12-10 05:57:35.786979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.003 [2024-12-10 05:57:35.855833] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:18.003 [2024-12-10 05:57:35.856690] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:18.003 [2024-12-10 05:57:35.856860] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:18.003 [2024-12-10 05:57:35.857023] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.570 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:18.828 [2024-12-10 05:57:36.695723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.828 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:19.086 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:19.086 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:19.344 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:19.344 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:19.603 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:19.861 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=072ab2eb-b90b-48dc-b30d-b511e4480bae 00:32:19.861 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 072ab2eb-b90b-48dc-b30d-b511e4480bae lvol 20 00:32:19.861 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f9d56e5b-7ee4-4e34-9fc9-9298148b4006 00:32:19.861 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:20.119 05:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9d56e5b-7ee4-4e34-9fc9-9298148b4006 00:32:20.377 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:20.377 [2024-12-10 05:57:38.315548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.634 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:20.634 
05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=338621 00:32:20.634 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:20.634 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:22.001 05:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f9d56e5b-7ee4-4e34-9fc9-9298148b4006 MY_SNAPSHOT 00:32:22.001 05:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7ede203b-1bdf-4f48-9d30-27db19900e0e 00:32:22.001 05:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f9d56e5b-7ee4-4e34-9fc9-9298148b4006 30 00:32:22.258 05:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7ede203b-1bdf-4f48-9d30-27db19900e0e MY_CLONE 00:32:22.515 05:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8c7dd99d-d3d0-4197-a9b0-e5f9ee14fa82 00:32:22.515 05:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8c7dd99d-d3d0-4197-a9b0-e5f9ee14fa82 00:32:23.079 05:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 338621 00:32:31.174 Initializing NVMe Controllers 00:32:31.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:31.174 
Controller IO queue size 128, less than required. 00:32:31.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:31.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:31.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:31.174 Initialization complete. Launching workers. 00:32:31.174 ======================================================== 00:32:31.174 Latency(us) 00:32:31.174 Device Information : IOPS MiB/s Average min max 00:32:31.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12629.50 49.33 10139.71 5638.44 48052.09 00:32:31.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12744.20 49.78 10044.39 4114.54 75292.44 00:32:31.174 ======================================================== 00:32:31.174 Total : 25373.70 99.12 10091.84 4114.54 75292.44 00:32:31.174 00:32:31.174 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:31.174 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9d56e5b-7ee4-4e34-9fc9-9298148b4006 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 072ab2eb-b90b-48dc-b30d-b511e4480bae 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.432 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.690 rmmod nvme_tcp 00:32:31.690 rmmod nvme_fabrics 00:32:31.690 rmmod nvme_keyring 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 338097 ']' 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 338097 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 338097 ']' 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 338097 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 338097 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338097' 00:32:31.690 killing process with pid 338097 00:32:31.690 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 338097 00:32:31.691 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 338097 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.949 05:57:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.949 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.853 00:32:33.853 real 0m23.193s 00:32:33.853 user 0m55.440s 00:32:33.853 sys 0m10.348s 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:33.853 ************************************ 00:32:33.853 END TEST nvmf_lvol 00:32:33.853 ************************************ 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.853 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:34.113 ************************************ 00:32:34.113 START TEST nvmf_lvs_grow 00:32:34.113 ************************************ 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:34.113 * Looking for test storage... 
00:32:34.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.113 05:57:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:34.113 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.113 05:57:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.113 --rc genhtml_branch_coverage=1 00:32:34.113 --rc genhtml_function_coverage=1 00:32:34.113 --rc genhtml_legend=1 00:32:34.113 --rc geninfo_all_blocks=1 00:32:34.113 --rc geninfo_unexecuted_blocks=1 00:32:34.113 00:32:34.113 ' 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.113 --rc genhtml_branch_coverage=1 00:32:34.113 --rc genhtml_function_coverage=1 00:32:34.113 --rc genhtml_legend=1 00:32:34.113 --rc geninfo_all_blocks=1 00:32:34.113 --rc geninfo_unexecuted_blocks=1 00:32:34.113 00:32:34.113 ' 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.113 --rc genhtml_branch_coverage=1 00:32:34.113 --rc genhtml_function_coverage=1 00:32:34.113 --rc genhtml_legend=1 00:32:34.113 --rc geninfo_all_blocks=1 00:32:34.113 --rc geninfo_unexecuted_blocks=1 00:32:34.113 00:32:34.113 ' 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.113 --rc genhtml_branch_coverage=1 00:32:34.113 --rc genhtml_function_coverage=1 00:32:34.113 --rc genhtml_legend=1 00:32:34.113 --rc geninfo_all_blocks=1 00:32:34.113 --rc 
geninfo_unexecuted_blocks=1 00:32:34.113 00:32:34.113 ' 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.113 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:34.114 05:57:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.114 05:57:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.114 05:57:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.114 05:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:40.681 
05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:40.681 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.682 05:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:40.682 05:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:40.682 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:40.682 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:40.682 Found net devices under 0000:af:00.0: cvl_0_0 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.682 05:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:40.682 Found net devices under 0000:af:00.1: cvl_0_1 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:40.682 
05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:40.682 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:40.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:32:40.941 00:32:40.941 --- 10.0.0.2 ping statistics --- 00:32:40.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.941 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:40.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:32:40.941 00:32:40.941 --- 10.0.0.1 ping statistics --- 00:32:40.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.941 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:40.941 05:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=344230 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 344230 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 344230 ']' 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.941 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:40.941 [2024-12-10 05:57:58.819967] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:40.941 [2024-12-10 05:57:58.820861] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:32:40.941 [2024-12-10 05:57:58.820894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.201 [2024-12-10 05:57:58.903419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.201 [2024-12-10 05:57:58.942650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.201 [2024-12-10 05:57:58.942687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.201 [2024-12-10 05:57:58.942694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.201 [2024-12-10 05:57:58.942700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.201 [2024-12-10 05:57:58.942709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:41.201 [2024-12-10 05:57:58.943224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.201 [2024-12-10 05:57:59.009626] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:41.201 [2024-12-10 05:57:59.009823] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.201 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:41.459 [2024-12-10 05:57:59.243862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:41.459 ************************************ 00:32:41.459 START TEST lvs_grow_clean 00:32:41.459 ************************************ 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:41.459 05:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:41.459 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:41.718 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:41.718 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:41.976 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c5055264-7bde-407e-a71a-3c08f28af27c 00:32:41.976 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:41.976 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:41.976 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:41.976 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:41.976 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5055264-7bde-407e-a71a-3c08f28af27c lvol 150 00:32:42.234 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d0a44687-a653-48fb-8902-666752a37075 00:32:42.234 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:42.234 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:42.493 [2024-12-10 05:58:00.255608] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:42.493 [2024-12-10 05:58:00.255732] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:42.493 true 00:32:42.493 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:42.493 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:42.751 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:42.751 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:42.751 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d0a44687-a653-48fb-8902-666752a37075 00:32:43.010 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.268 [2024-12-10 05:58:01.024078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.268 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=344617 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 344617 /var/tmp/bdevperf.sock 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 344617 ']' 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.526 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:43.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:43.527 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.527 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:43.527 [2024-12-10 05:58:01.273137] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:32:43.527 [2024-12-10 05:58:01.273186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344617 ] 00:32:43.527 [2024-12-10 05:58:01.351353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.527 [2024-12-10 05:58:01.391771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.784 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.784 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:43.784 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:44.042 Nvme0n1 00:32:44.042 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:44.300 [ 00:32:44.300 { 00:32:44.300 "name": "Nvme0n1", 00:32:44.300 "aliases": [ 00:32:44.300 "d0a44687-a653-48fb-8902-666752a37075" 00:32:44.300 ], 00:32:44.300 "product_name": "NVMe disk", 00:32:44.300 
"block_size": 4096, 00:32:44.300 "num_blocks": 38912, 00:32:44.300 "uuid": "d0a44687-a653-48fb-8902-666752a37075", 00:32:44.300 "numa_id": 1, 00:32:44.300 "assigned_rate_limits": { 00:32:44.300 "rw_ios_per_sec": 0, 00:32:44.300 "rw_mbytes_per_sec": 0, 00:32:44.300 "r_mbytes_per_sec": 0, 00:32:44.300 "w_mbytes_per_sec": 0 00:32:44.300 }, 00:32:44.300 "claimed": false, 00:32:44.300 "zoned": false, 00:32:44.300 "supported_io_types": { 00:32:44.300 "read": true, 00:32:44.300 "write": true, 00:32:44.300 "unmap": true, 00:32:44.300 "flush": true, 00:32:44.300 "reset": true, 00:32:44.300 "nvme_admin": true, 00:32:44.300 "nvme_io": true, 00:32:44.300 "nvme_io_md": false, 00:32:44.300 "write_zeroes": true, 00:32:44.300 "zcopy": false, 00:32:44.300 "get_zone_info": false, 00:32:44.300 "zone_management": false, 00:32:44.300 "zone_append": false, 00:32:44.300 "compare": true, 00:32:44.300 "compare_and_write": true, 00:32:44.300 "abort": true, 00:32:44.300 "seek_hole": false, 00:32:44.300 "seek_data": false, 00:32:44.300 "copy": true, 00:32:44.300 "nvme_iov_md": false 00:32:44.300 }, 00:32:44.300 "memory_domains": [ 00:32:44.300 { 00:32:44.300 "dma_device_id": "system", 00:32:44.300 "dma_device_type": 1 00:32:44.300 } 00:32:44.300 ], 00:32:44.300 "driver_specific": { 00:32:44.300 "nvme": [ 00:32:44.300 { 00:32:44.300 "trid": { 00:32:44.300 "trtype": "TCP", 00:32:44.300 "adrfam": "IPv4", 00:32:44.300 "traddr": "10.0.0.2", 00:32:44.300 "trsvcid": "4420", 00:32:44.300 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:44.300 }, 00:32:44.300 "ctrlr_data": { 00:32:44.300 "cntlid": 1, 00:32:44.300 "vendor_id": "0x8086", 00:32:44.300 "model_number": "SPDK bdev Controller", 00:32:44.300 "serial_number": "SPDK0", 00:32:44.300 "firmware_revision": "25.01", 00:32:44.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.300 "oacs": { 00:32:44.300 "security": 0, 00:32:44.300 "format": 0, 00:32:44.300 "firmware": 0, 00:32:44.300 "ns_manage": 0 00:32:44.300 }, 00:32:44.300 "multi_ctrlr": true, 
00:32:44.300 "ana_reporting": false 00:32:44.300 }, 00:32:44.300 "vs": { 00:32:44.300 "nvme_version": "1.3" 00:32:44.300 }, 00:32:44.300 "ns_data": { 00:32:44.300 "id": 1, 00:32:44.300 "can_share": true 00:32:44.300 } 00:32:44.300 } 00:32:44.300 ], 00:32:44.300 "mp_policy": "active_passive" 00:32:44.300 } 00:32:44.300 } 00:32:44.300 ] 00:32:44.300 05:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:44.300 05:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=344842 00:32:44.300 05:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:44.300 Running I/O for 10 seconds... 00:32:45.673 Latency(us) 00:32:45.673 [2024-12-10T04:58:03.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.673 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:32:45.673 [2024-12-10T04:58:03.632Z] =================================================================================================================== 00:32:45.673 [2024-12-10T04:58:03.632Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:32:45.673 00:32:46.238 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:46.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.496 Nvme0n1 : 2.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:32:46.496 [2024-12-10T04:58:04.455Z] 
=================================================================================================================== 00:32:46.497 [2024-12-10T04:58:04.456Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:32:46.497 00:32:46.497 true 00:32:46.497 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:46.497 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:46.810 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:46.810 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:46.810 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 344842 00:32:47.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.429 Nvme0n1 : 3.00 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:32:47.429 [2024-12-10T04:58:05.388Z] =================================================================================================================== 00:32:47.429 [2024-12-10T04:58:05.388Z] Total : 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:32:47.429 00:32:48.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.362 Nvme0n1 : 4.00 23399.75 91.41 0.00 0.00 0.00 0.00 0.00 00:32:48.362 [2024-12-10T04:58:06.321Z] =================================================================================================================== 00:32:48.362 [2024-12-10T04:58:06.321Z] Total : 23399.75 91.41 0.00 0.00 0.00 0.00 0.00 00:32:48.362 00:32:49.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:32:49.296 Nvme0n1 : 5.00 23469.60 91.68 0.00 0.00 0.00 0.00 0.00 00:32:49.296 [2024-12-10T04:58:07.255Z] =================================================================================================================== 00:32:49.296 [2024-12-10T04:58:07.255Z] Total : 23469.60 91.68 0.00 0.00 0.00 0.00 0.00 00:32:49.296 00:32:50.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:50.669 Nvme0n1 : 6.00 23516.17 91.86 0.00 0.00 0.00 0.00 0.00 00:32:50.669 [2024-12-10T04:58:08.628Z] =================================================================================================================== 00:32:50.669 [2024-12-10T04:58:08.628Z] Total : 23516.17 91.86 0.00 0.00 0.00 0.00 0.00 00:32:50.669 00:32:51.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.603 Nvme0n1 : 7.00 23551.86 92.00 0.00 0.00 0.00 0.00 0.00 00:32:51.603 [2024-12-10T04:58:09.562Z] =================================================================================================================== 00:32:51.603 [2024-12-10T04:58:09.562Z] Total : 23551.86 92.00 0.00 0.00 0.00 0.00 0.00 00:32:51.603 00:32:52.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.537 Nvme0n1 : 8.00 23560.62 92.03 0.00 0.00 0.00 0.00 0.00 00:32:52.537 [2024-12-10T04:58:10.496Z] =================================================================================================================== 00:32:52.537 [2024-12-10T04:58:10.496Z] Total : 23560.62 92.03 0.00 0.00 0.00 0.00 0.00 00:32:52.537 00:32:53.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.472 Nvme0n1 : 9.00 23569.33 92.07 0.00 0.00 0.00 0.00 0.00 00:32:53.472 [2024-12-10T04:58:11.431Z] =================================================================================================================== 00:32:53.472 [2024-12-10T04:58:11.431Z] Total : 23569.33 92.07 0.00 0.00 0.00 0.00 0.00 00:32:53.472 
00:32:54.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.406 Nvme0n1 : 10.00 23587.30 92.14 0.00 0.00 0.00 0.00 0.00 00:32:54.406 [2024-12-10T04:58:12.365Z] =================================================================================================================== 00:32:54.406 [2024-12-10T04:58:12.365Z] Total : 23587.30 92.14 0.00 0.00 0.00 0.00 0.00 00:32:54.406 00:32:54.406 00:32:54.406 Latency(us) 00:32:54.406 [2024-12-10T04:58:12.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.406 Nvme0n1 : 10.00 23593.35 92.16 0.00 0.00 5422.29 3198.78 26339.23 00:32:54.406 [2024-12-10T04:58:12.365Z] =================================================================================================================== 00:32:54.406 [2024-12-10T04:58:12.365Z] Total : 23593.35 92.16 0.00 0.00 5422.29 3198.78 26339.23 00:32:54.406 { 00:32:54.406 "results": [ 00:32:54.406 { 00:32:54.406 "job": "Nvme0n1", 00:32:54.406 "core_mask": "0x2", 00:32:54.406 "workload": "randwrite", 00:32:54.406 "status": "finished", 00:32:54.406 "queue_depth": 128, 00:32:54.406 "io_size": 4096, 00:32:54.406 "runtime": 10.002863, 00:32:54.406 "iops": 23593.345225262008, 00:32:54.406 "mibps": 92.16150478617972, 00:32:54.406 "io_failed": 0, 00:32:54.406 "io_timeout": 0, 00:32:54.406 "avg_latency_us": 5422.29461862248, 00:32:54.406 "min_latency_us": 3198.7809523809524, 00:32:54.406 "max_latency_us": 26339.230476190478 00:32:54.406 } 00:32:54.406 ], 00:32:54.406 "core_count": 1 00:32:54.406 } 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 344617 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 344617 ']' 00:32:54.406 05:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 344617 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 344617 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 344617' 00:32:54.406 killing process with pid 344617 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 344617 00:32:54.406 Received shutdown signal, test time was about 10.000000 seconds 00:32:54.406 00:32:54.406 Latency(us) 00:32:54.406 [2024-12-10T04:58:12.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.406 [2024-12-10T04:58:12.365Z] =================================================================================================================== 00:32:54.406 [2024-12-10T04:58:12.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.406 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 344617 00:32:54.665 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:54.923 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:54.923 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:54.923 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:55.182 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:55.182 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:55.182 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:55.440 [2024-12-10 05:58:13.235674] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:55.440 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:55.699 request: 00:32:55.699 { 00:32:55.699 "uuid": "c5055264-7bde-407e-a71a-3c08f28af27c", 00:32:55.699 "method": 
"bdev_lvol_get_lvstores", 00:32:55.699 "req_id": 1 00:32:55.699 } 00:32:55.699 Got JSON-RPC error response 00:32:55.699 response: 00:32:55.699 { 00:32:55.699 "code": -19, 00:32:55.699 "message": "No such device" 00:32:55.699 } 00:32:55.699 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:55.699 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:55.699 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:55.699 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:55.699 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:55.958 aio_bdev 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d0a44687-a653-48fb-8902-666752a37075 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d0a44687-a653-48fb-8902-666752a37075 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:55.958 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d0a44687-a653-48fb-8902-666752a37075 -t 2000 00:32:56.216 [ 00:32:56.216 { 00:32:56.216 "name": "d0a44687-a653-48fb-8902-666752a37075", 00:32:56.216 "aliases": [ 00:32:56.216 "lvs/lvol" 00:32:56.216 ], 00:32:56.216 "product_name": "Logical Volume", 00:32:56.216 "block_size": 4096, 00:32:56.216 "num_blocks": 38912, 00:32:56.216 "uuid": "d0a44687-a653-48fb-8902-666752a37075", 00:32:56.216 "assigned_rate_limits": { 00:32:56.216 "rw_ios_per_sec": 0, 00:32:56.216 "rw_mbytes_per_sec": 0, 00:32:56.216 "r_mbytes_per_sec": 0, 00:32:56.216 "w_mbytes_per_sec": 0 00:32:56.216 }, 00:32:56.216 "claimed": false, 00:32:56.216 "zoned": false, 00:32:56.216 "supported_io_types": { 00:32:56.216 "read": true, 00:32:56.216 "write": true, 00:32:56.216 "unmap": true, 00:32:56.216 "flush": false, 00:32:56.216 "reset": true, 00:32:56.216 "nvme_admin": false, 00:32:56.216 "nvme_io": false, 00:32:56.216 "nvme_io_md": false, 00:32:56.216 "write_zeroes": true, 00:32:56.216 "zcopy": false, 00:32:56.216 "get_zone_info": false, 00:32:56.216 "zone_management": false, 00:32:56.216 "zone_append": false, 00:32:56.216 "compare": false, 00:32:56.216 "compare_and_write": false, 00:32:56.216 "abort": false, 00:32:56.216 "seek_hole": true, 00:32:56.216 "seek_data": true, 00:32:56.216 "copy": false, 00:32:56.216 "nvme_iov_md": false 00:32:56.216 }, 00:32:56.216 "driver_specific": { 00:32:56.216 "lvol": { 00:32:56.216 "lvol_store_uuid": "c5055264-7bde-407e-a71a-3c08f28af27c", 00:32:56.216 "base_bdev": "aio_bdev", 00:32:56.216 
"thin_provision": false, 00:32:56.216 "num_allocated_clusters": 38, 00:32:56.216 "snapshot": false, 00:32:56.216 "clone": false, 00:32:56.216 "esnap_clone": false 00:32:56.216 } 00:32:56.216 } 00:32:56.216 } 00:32:56.216 ] 00:32:56.216 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:56.216 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:56.216 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:56.476 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:56.476 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5055264-7bde-407e-a71a-3c08f28af27c 00:32:56.476 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:56.734 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:56.734 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d0a44687-a653-48fb-8902-666752a37075 00:32:56.734 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5055264-7bde-407e-a71a-3c08f28af27c 
00:32:56.993 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:57.251 00:32:57.251 real 0m15.774s 00:32:57.251 user 0m15.292s 00:32:57.251 sys 0m1.487s 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.251 ************************************ 00:32:57.251 END TEST lvs_grow_clean 00:32:57.251 ************************************ 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:57.251 ************************************ 00:32:57.251 START TEST lvs_grow_dirty 00:32:57.251 ************************************ 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:57.251 05:58:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:57.251 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:57.509 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:57.509 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:57.766 05:58:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:32:57.767 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:32:57.767 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:58.025 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:58.025 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:58.025 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d lvol 150 00:32:58.025 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:32:58.025 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:58.025 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:58.284 [2024-12-10 05:58:16.131616] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:58.284 [2024-12-10 
05:58:16.131741] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:58.284 true 00:32:58.284 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:32:58.284 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:58.542 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:58.542 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:58.801 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:32:58.801 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:59.059 [2024-12-10 05:58:16.888037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.059 05:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=347169 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 347169 /var/tmp/bdevperf.sock 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 347169 ']' 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:59.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.317 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:59.317 [2024-12-10 05:58:17.143603] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:32:59.317 [2024-12-10 05:58:17.143652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347169 ] 00:32:59.317 [2024-12-10 05:58:17.223514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.317 [2024-12-10 05:58:17.263900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.575 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.575 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:59.575 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:59.902 Nvme0n1 00:32:59.902 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:59.902 [ 00:32:59.902 { 00:32:59.902 "name": "Nvme0n1", 00:32:59.902 "aliases": [ 00:32:59.902 "192f26bd-10c8-4415-a50e-2ee2cc61c6b7" 00:32:59.902 ], 00:32:59.902 "product_name": "NVMe disk", 00:32:59.902 "block_size": 4096, 00:32:59.902 "num_blocks": 38912, 00:32:59.902 "uuid": "192f26bd-10c8-4415-a50e-2ee2cc61c6b7", 00:32:59.902 "numa_id": 1, 00:32:59.902 "assigned_rate_limits": { 00:32:59.902 "rw_ios_per_sec": 0, 00:32:59.902 "rw_mbytes_per_sec": 0, 00:32:59.902 "r_mbytes_per_sec": 0, 00:32:59.902 "w_mbytes_per_sec": 0 00:32:59.902 }, 00:32:59.902 "claimed": false, 00:32:59.902 "zoned": false, 
00:32:59.902 "supported_io_types": { 00:32:59.902 "read": true, 00:32:59.902 "write": true, 00:32:59.902 "unmap": true, 00:32:59.902 "flush": true, 00:32:59.902 "reset": true, 00:32:59.902 "nvme_admin": true, 00:32:59.902 "nvme_io": true, 00:32:59.902 "nvme_io_md": false, 00:32:59.902 "write_zeroes": true, 00:32:59.902 "zcopy": false, 00:32:59.902 "get_zone_info": false, 00:32:59.902 "zone_management": false, 00:32:59.902 "zone_append": false, 00:32:59.902 "compare": true, 00:32:59.902 "compare_and_write": true, 00:32:59.902 "abort": true, 00:32:59.902 "seek_hole": false, 00:32:59.902 "seek_data": false, 00:32:59.902 "copy": true, 00:32:59.902 "nvme_iov_md": false 00:32:59.902 }, 00:32:59.902 "memory_domains": [ 00:32:59.902 { 00:32:59.902 "dma_device_id": "system", 00:32:59.902 "dma_device_type": 1 00:32:59.902 } 00:32:59.902 ], 00:32:59.902 "driver_specific": { 00:32:59.902 "nvme": [ 00:32:59.902 { 00:32:59.902 "trid": { 00:32:59.902 "trtype": "TCP", 00:32:59.902 "adrfam": "IPv4", 00:32:59.902 "traddr": "10.0.0.2", 00:32:59.902 "trsvcid": "4420", 00:32:59.902 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:59.902 }, 00:32:59.902 "ctrlr_data": { 00:32:59.902 "cntlid": 1, 00:32:59.902 "vendor_id": "0x8086", 00:32:59.902 "model_number": "SPDK bdev Controller", 00:32:59.902 "serial_number": "SPDK0", 00:32:59.902 "firmware_revision": "25.01", 00:32:59.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.903 "oacs": { 00:32:59.903 "security": 0, 00:32:59.903 "format": 0, 00:32:59.903 "firmware": 0, 00:32:59.903 "ns_manage": 0 00:32:59.903 }, 00:32:59.903 "multi_ctrlr": true, 00:32:59.903 "ana_reporting": false 00:32:59.903 }, 00:32:59.903 "vs": { 00:32:59.903 "nvme_version": "1.3" 00:32:59.903 }, 00:32:59.903 "ns_data": { 00:32:59.903 "id": 1, 00:32:59.903 "can_share": true 00:32:59.903 } 00:32:59.903 } 00:32:59.903 ], 00:32:59.903 "mp_policy": "active_passive" 00:32:59.903 } 00:32:59.903 } 00:32:59.903 ] 00:32:59.903 05:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=347391 00:32:59.903 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:59.903 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:00.160 Running I/O for 10 seconds... 00:33:01.094 Latency(us) 00:33:01.094 [2024-12-10T04:58:19.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.094 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:33:01.094 [2024-12-10T04:58:19.053Z] =================================================================================================================== 00:33:01.094 [2024-12-10T04:58:19.053Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:33:01.094 00:33:02.028 05:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:02.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.028 Nvme0n1 : 2.00 23313.00 91.07 0.00 0.00 0.00 0.00 0.00 00:33:02.028 [2024-12-10T04:58:19.987Z] =================================================================================================================== 00:33:02.028 [2024-12-10T04:58:19.987Z] Total : 23313.00 91.07 0.00 0.00 0.00 0.00 0.00 00:33:02.028 00:33:02.028 true 00:33:02.286 05:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:02.286 05:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:02.286 05:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:02.286 05:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:02.286 05:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 347391 00:33:03.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.220 Nvme0n1 : 3.00 23289.00 90.97 0.00 0.00 0.00 0.00 0.00 00:33:03.220 [2024-12-10T04:58:21.179Z] =================================================================================================================== 00:33:03.220 [2024-12-10T04:58:21.179Z] Total : 23289.00 90.97 0.00 0.00 0.00 0.00 0.00 00:33:03.220 00:33:04.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:04.154 Nvme0n1 : 4.00 23372.25 91.30 0.00 0.00 0.00 0.00 0.00 00:33:04.154 [2024-12-10T04:58:22.113Z] =================================================================================================================== 00:33:04.154 [2024-12-10T04:58:22.113Z] Total : 23372.25 91.30 0.00 0.00 0.00 0.00 0.00 00:33:04.154 00:33:05.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.087 Nvme0n1 : 5.00 23473.00 91.69 0.00 0.00 0.00 0.00 0.00 00:33:05.087 [2024-12-10T04:58:23.046Z] =================================================================================================================== 00:33:05.087 [2024-12-10T04:58:23.046Z] Total : 23473.00 91.69 0.00 0.00 0.00 0.00 0.00 00:33:05.087 00:33:06.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:33:06.021 Nvme0n1 : 6.00 23519.00 91.87 0.00 0.00 0.00 0.00 0.00 00:33:06.021 [2024-12-10T04:58:23.980Z] =================================================================================================================== 00:33:06.021 [2024-12-10T04:58:23.980Z] Total : 23519.00 91.87 0.00 0.00 0.00 0.00 0.00 00:33:06.021 00:33:06.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:06.954 Nvme0n1 : 7.00 23570.00 92.07 0.00 0.00 0.00 0.00 0.00 00:33:06.954 [2024-12-10T04:58:24.913Z] =================================================================================================================== 00:33:06.954 [2024-12-10T04:58:24.913Z] Total : 23570.00 92.07 0.00 0.00 0.00 0.00 0.00 00:33:06.954 00:33:08.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:08.327 Nvme0n1 : 8.00 23608.25 92.22 0.00 0.00 0.00 0.00 0.00 00:33:08.327 [2024-12-10T04:58:26.286Z] =================================================================================================================== 00:33:08.327 [2024-12-10T04:58:26.286Z] Total : 23608.25 92.22 0.00 0.00 0.00 0.00 0.00 00:33:08.327 00:33:09.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:09.261 Nvme0n1 : 9.00 23638.00 92.34 0.00 0.00 0.00 0.00 0.00 00:33:09.261 [2024-12-10T04:58:27.220Z] =================================================================================================================== 00:33:09.261 [2024-12-10T04:58:27.220Z] Total : 23638.00 92.34 0.00 0.00 0.00 0.00 0.00 00:33:09.261 00:33:10.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:10.194 Nvme0n1 : 10.00 23661.80 92.43 0.00 0.00 0.00 0.00 0.00 00:33:10.194 [2024-12-10T04:58:28.153Z] =================================================================================================================== 00:33:10.194 [2024-12-10T04:58:28.153Z] Total : 23661.80 92.43 0.00 0.00 0.00 0.00 0.00 00:33:10.194 00:33:10.194 
00:33:10.194 Latency(us) 00:33:10.194 [2024-12-10T04:58:28.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:10.194 Nvme0n1 : 10.00 23668.99 92.46 0.00 0.00 5405.13 3214.38 26838.55 00:33:10.194 [2024-12-10T04:58:28.153Z] =================================================================================================================== 00:33:10.194 [2024-12-10T04:58:28.153Z] Total : 23668.99 92.46 0.00 0.00 5405.13 3214.38 26838.55 00:33:10.194 { 00:33:10.194 "results": [ 00:33:10.194 { 00:33:10.194 "job": "Nvme0n1", 00:33:10.194 "core_mask": "0x2", 00:33:10.194 "workload": "randwrite", 00:33:10.194 "status": "finished", 00:33:10.194 "queue_depth": 128, 00:33:10.194 "io_size": 4096, 00:33:10.194 "runtime": 10.00237, 00:33:10.194 "iops": 23668.990449263525, 00:33:10.194 "mibps": 92.45699394243564, 00:33:10.194 "io_failed": 0, 00:33:10.194 "io_timeout": 0, 00:33:10.194 "avg_latency_us": 5405.128615220733, 00:33:10.194 "min_latency_us": 3214.384761904762, 00:33:10.194 "max_latency_us": 26838.55238095238 00:33:10.194 } 00:33:10.194 ], 00:33:10.194 "core_count": 1 00:33:10.194 } 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 347169 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 347169 ']' 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 347169 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.194 05:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 347169 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:10.194 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 347169' 00:33:10.194 killing process with pid 347169 00:33:10.195 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 347169 00:33:10.195 Received shutdown signal, test time was about 10.000000 seconds 00:33:10.195 00:33:10.195 Latency(us) 00:33:10.195 [2024-12-10T04:58:28.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.195 [2024-12-10T04:58:28.154Z] =================================================================================================================== 00:33:10.195 [2024-12-10T04:58:28.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:10.195 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 347169 00:33:10.195 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.453 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:10.711 05:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:10.711 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 344230 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 344230 00:33:10.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 344230 Killed "${NVMF_APP[@]}" "$@" 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=349053 00:33:10.970 05:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 349053 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 349053 ']' 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.970 05:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:10.970 [2024-12-10 05:58:28.851564] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.970 [2024-12-10 05:58:28.852484] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:33:10.970 [2024-12-10 05:58:28.852522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.229 [2024-12-10 05:58:28.935380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.229 [2024-12-10 05:58:28.974153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.229 [2024-12-10 05:58:28.974193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.229 [2024-12-10 05:58:28.974200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.229 [2024-12-10 05:58:28.974206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.229 [2024-12-10 05:58:28.974211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.229 [2024-12-10 05:58:28.974733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.229 [2024-12-10 05:58:29.042237] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:11.229 [2024-12-10 05:58:29.042435] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.229 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:11.488 [2024-12-10 05:58:29.288104] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:11.488 [2024-12-10 05:58:29.288401] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:11.488 [2024-12-10 05:58:29.288496] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:11.488 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:11.747 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 192f26bd-10c8-4415-a50e-2ee2cc61c6b7 -t 2000 00:33:11.747 [ 00:33:11.747 { 00:33:11.747 "name": "192f26bd-10c8-4415-a50e-2ee2cc61c6b7", 00:33:11.747 "aliases": [ 00:33:11.747 "lvs/lvol" 00:33:11.747 ], 00:33:11.747 "product_name": "Logical Volume", 00:33:11.747 "block_size": 4096, 00:33:11.747 "num_blocks": 38912, 00:33:11.747 "uuid": "192f26bd-10c8-4415-a50e-2ee2cc61c6b7", 00:33:11.747 "assigned_rate_limits": { 00:33:11.747 "rw_ios_per_sec": 0, 00:33:11.747 "rw_mbytes_per_sec": 0, 00:33:11.747 "r_mbytes_per_sec": 0, 00:33:11.747 "w_mbytes_per_sec": 0 00:33:11.747 }, 00:33:11.747 "claimed": false, 00:33:11.747 "zoned": false, 00:33:11.747 "supported_io_types": { 00:33:11.747 "read": true, 00:33:11.747 "write": true, 00:33:11.747 "unmap": true, 00:33:11.747 "flush": false, 00:33:11.747 "reset": true, 00:33:11.747 "nvme_admin": false, 00:33:11.747 "nvme_io": false, 00:33:11.747 "nvme_io_md": false, 00:33:11.747 "write_zeroes": true, 
00:33:11.747 "zcopy": false, 00:33:11.747 "get_zone_info": false, 00:33:11.747 "zone_management": false, 00:33:11.747 "zone_append": false, 00:33:11.747 "compare": false, 00:33:11.747 "compare_and_write": false, 00:33:11.747 "abort": false, 00:33:11.747 "seek_hole": true, 00:33:11.747 "seek_data": true, 00:33:11.747 "copy": false, 00:33:11.747 "nvme_iov_md": false 00:33:11.747 }, 00:33:11.747 "driver_specific": { 00:33:11.747 "lvol": { 00:33:11.747 "lvol_store_uuid": "f84d8b57-1135-4e5a-b4f3-9947d7c9285d", 00:33:11.747 "base_bdev": "aio_bdev", 00:33:11.747 "thin_provision": false, 00:33:11.747 "num_allocated_clusters": 38, 00:33:11.747 "snapshot": false, 00:33:11.747 "clone": false, 00:33:11.747 "esnap_clone": false 00:33:11.747 } 00:33:11.747 } 00:33:11.747 } 00:33:11.747 ] 00:33:11.747 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:11.747 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:11.747 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:12.006 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:12.006 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:12.006 05:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:12.265 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:12.265 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:12.265 [2024-12-10 05:58:30.207314] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:12.524 request: 00:33:12.524 { 00:33:12.524 "uuid": "f84d8b57-1135-4e5a-b4f3-9947d7c9285d", 00:33:12.524 "method": "bdev_lvol_get_lvstores", 00:33:12.524 "req_id": 1 00:33:12.524 } 00:33:12.524 Got JSON-RPC error response 00:33:12.524 response: 00:33:12.524 { 00:33:12.524 "code": -19, 00:33:12.524 "message": "No such device" 00:33:12.524 } 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.524 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:12.782 aio_bdev 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:12.782 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:13.040 05:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 192f26bd-10c8-4415-a50e-2ee2cc61c6b7 -t 2000 00:33:13.297 [ 00:33:13.297 { 00:33:13.297 "name": "192f26bd-10c8-4415-a50e-2ee2cc61c6b7", 00:33:13.297 "aliases": [ 00:33:13.297 "lvs/lvol" 00:33:13.297 ], 00:33:13.297 "product_name": "Logical Volume", 00:33:13.297 "block_size": 4096, 00:33:13.297 "num_blocks": 38912, 00:33:13.297 "uuid": "192f26bd-10c8-4415-a50e-2ee2cc61c6b7", 00:33:13.297 "assigned_rate_limits": { 00:33:13.297 "rw_ios_per_sec": 0, 00:33:13.297 "rw_mbytes_per_sec": 0, 00:33:13.297 
"r_mbytes_per_sec": 0, 00:33:13.297 "w_mbytes_per_sec": 0 00:33:13.297 }, 00:33:13.297 "claimed": false, 00:33:13.297 "zoned": false, 00:33:13.297 "supported_io_types": { 00:33:13.297 "read": true, 00:33:13.297 "write": true, 00:33:13.297 "unmap": true, 00:33:13.297 "flush": false, 00:33:13.297 "reset": true, 00:33:13.297 "nvme_admin": false, 00:33:13.297 "nvme_io": false, 00:33:13.297 "nvme_io_md": false, 00:33:13.297 "write_zeroes": true, 00:33:13.297 "zcopy": false, 00:33:13.297 "get_zone_info": false, 00:33:13.297 "zone_management": false, 00:33:13.297 "zone_append": false, 00:33:13.297 "compare": false, 00:33:13.297 "compare_and_write": false, 00:33:13.297 "abort": false, 00:33:13.297 "seek_hole": true, 00:33:13.297 "seek_data": true, 00:33:13.297 "copy": false, 00:33:13.297 "nvme_iov_md": false 00:33:13.297 }, 00:33:13.297 "driver_specific": { 00:33:13.297 "lvol": { 00:33:13.297 "lvol_store_uuid": "f84d8b57-1135-4e5a-b4f3-9947d7c9285d", 00:33:13.297 "base_bdev": "aio_bdev", 00:33:13.297 "thin_provision": false, 00:33:13.297 "num_allocated_clusters": 38, 00:33:13.297 "snapshot": false, 00:33:13.297 "clone": false, 00:33:13.297 "esnap_clone": false 00:33:13.297 } 00:33:13.297 } 00:33:13.297 } 00:33:13.297 ] 00:33:13.297 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:13.297 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:13.297 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:13.297 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:13.297 05:58:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:13.297 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:13.555 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:13.555 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 192f26bd-10c8-4415-a50e-2ee2cc61c6b7 00:33:13.813 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f84d8b57-1135-4e5a-b4f3-9947d7c9285d 00:33:14.072 05:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:14.072 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.331 00:33:14.331 real 0m16.908s 00:33:14.331 user 0m34.210s 00:33:14.331 sys 0m3.892s 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:14.331 ************************************ 00:33:14.331 END TEST lvs_grow_dirty 00:33:14.331 ************************************ 
00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:14.331 nvmf_trace.0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.331 05:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:14.331 rmmod nvme_tcp 00:33:14.331 rmmod nvme_fabrics 00:33:14.331 rmmod nvme_keyring 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 349053 ']' 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 349053 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 349053 ']' 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 349053 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349053 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.331 05:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349053' 00:33:14.331 killing process with pid 349053 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 349053 00:33:14.331 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 349053 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.590 05:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.127 05:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.127 00:33:17.127 real 0m42.647s 00:33:17.127 user 0m52.284s 00:33:17.127 sys 0m10.808s 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:17.127 ************************************ 00:33:17.127 END TEST nvmf_lvs_grow 00:33:17.127 ************************************ 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:17.127 ************************************ 00:33:17.127 START TEST nvmf_bdev_io_wait 00:33:17.127 ************************************ 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:17.127 * Looking for test storage... 
00:33:17.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:17.127 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:17.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.128 --rc genhtml_branch_coverage=1 00:33:17.128 --rc genhtml_function_coverage=1 00:33:17.128 --rc genhtml_legend=1 00:33:17.128 --rc geninfo_all_blocks=1 00:33:17.128 --rc geninfo_unexecuted_blocks=1 00:33:17.128 00:33:17.128 ' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:17.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.128 --rc genhtml_branch_coverage=1 00:33:17.128 --rc genhtml_function_coverage=1 00:33:17.128 --rc genhtml_legend=1 00:33:17.128 --rc geninfo_all_blocks=1 00:33:17.128 --rc geninfo_unexecuted_blocks=1 00:33:17.128 00:33:17.128 ' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:17.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.128 --rc genhtml_branch_coverage=1 00:33:17.128 --rc genhtml_function_coverage=1 00:33:17.128 --rc genhtml_legend=1 00:33:17.128 --rc geninfo_all_blocks=1 00:33:17.128 --rc geninfo_unexecuted_blocks=1 00:33:17.128 00:33:17.128 ' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:17.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.128 --rc genhtml_branch_coverage=1 00:33:17.128 --rc genhtml_function_coverage=1 
00:33:17.128 --rc genhtml_legend=1 00:33:17.128 --rc geninfo_all_blocks=1 00:33:17.128 --rc geninfo_unexecuted_blocks=1 00:33:17.128 00:33:17.128 ' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:17.128 05:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.128 05:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.128 05:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.128 05:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.128 05:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:23.698 05:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.698 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:23.699 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:23.699 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:23.699 Found net devices under 0000:af:00.0: cvl_0_0 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:23.699 Found net devices under 0000:af:00.1: cvl_0_1 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.699 05:58:41 
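For readers following the trace: the `gather_supported_nvmf_pci_devs` step above walks sysfs, matches PCI vendor/device IDs against a supported-NIC table, and collects the bound netdev names. A standalone sketch of that loop, assuming the standard Linux sysfs layout `/sys/bus/pci/devices/<addr>/{vendor,device,net/}`; only the Intel 0x8086/0x159b (E810) pair matched in this log is shown, the rest of common.sh's ID table is omitted:

```shell
# Sketch of the device-discovery loop (assumption: Linux sysfs layout; the
# real common.sh also handles x722 and several Mellanox device IDs).
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    [[ -e "$pci/vendor" ]] || continue          # skip if sysfs is absent
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    if [[ "$vendor" == "$intel" && "$device" == "0x159b" ]]; then
        # the kernel exposes the bound netdev name under .../net/
        for net_dev in "$pci"/net/*; do
            [[ -e "$net_dev" ]] && echo "Found net device under ${pci##*/}: ${net_dev##*/}"
        done
    fi
done
echo "scan complete"
```

On the WFP3 node this is what yields the two `cvl_0_0`/`cvl_0_1` ice-driver ports reported above.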
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:23.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:33:23.699 00:33:23.699 --- 10.0.0.2 ping statistics --- 00:33:23.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.699 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:23.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:33:23.699 00:33:23.699 --- 10.0.0.1 ping statistics --- 00:33:23.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.699 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.699 05:58:41 
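The `nvmf_tcp_init` sequence just traced can be condensed as follows. This is a sketch, not the helper itself: it assumes two ports of one physical NIC looped back to each other (here `cvl_0_0`/`cvl_0_1`), requires root, and is therefore only defined as a function, never invoked:

```shell
# Condensed sketch of the netns loopback topology built above (requires root).
nvmf_tcp_init_sketch() {
    local target=cvl_0_0 initiator=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$target" && ip -4 addr flush "$initiator"
    ip netns add "$ns"
    ip link set "$target" netns "$ns"           # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev "$initiator"    # initiator IP stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
    ip link set "$initiator" up
    ip netns exec "$ns" ip link set "$target" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP discovery port on the initiator side, then verify both directions
    iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The two one-packet pings in the log (0.262 ms and 0.127 ms round trips) are the success criteria for this step; only after both do the transport options get set and `nvme-tcp` get loaded.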
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=353507 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 353507 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 353507 ']' 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.699 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:23.700 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.700 05:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:23.700 [2024-12-10 05:58:41.606952] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:23.700 [2024-12-10 05:58:41.607809] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:33:23.700 [2024-12-10 05:58:41.607843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.959 [2024-12-10 05:58:41.692560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.959 [2024-12-10 05:58:41.735711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.959 [2024-12-10 05:58:41.735749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.959 [2024-12-10 05:58:41.735756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.959 [2024-12-10 05:58:41.735762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.959 [2024-12-10 05:58:41.735768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:23.959 [2024-12-10 05:58:41.737298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.959 [2024-12-10 05:58:41.737344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.959 [2024-12-10 05:58:41.737448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.959 [2024-12-10 05:58:41.737450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.959 [2024-12-10 05:58:41.737785] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.527 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.785 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 
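The `waitforlisten 353507` call above blocks until the target's RPC server is reachable. A minimal sketch of that behaviour, assuming (as the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message suggests) that readiness is signalled by the socket appearing while the process stays alive; the retry/poll details here are illustrative, not the helper's exact logic:

```shell
# Illustrative poll loop: succeed once $rpc_addr exists as a socket,
# fail fast if the target process dies first.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process gone: give up
        [[ -S "$rpc_addr" ]] && return 0         # UNIX socket exists: ready
        sleep 0.1
    done
    return 1
}
```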
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.786 [2024-12-10 05:58:42.559845] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:24.786 [2024-12-10 05:58:42.560229] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:24.786 [2024-12-10 05:58:42.560423] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:24.786 [2024-12-10 05:58:42.560586] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.786 [2024-12-10 05:58:42.570113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.786 Malloc0 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.786 [2024-12-10 05:58:42.646503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=353749 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=353751 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:24.786 05:58:42 
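At this point the target side is fully provisioned. The RPC sequence scattered through the trace above, collected in one place (the `rpc.py` path is an assumption; in the test each call is wrapped by `rpc_cmd` and runs against the target inside the namespace):

```shell
# Target provisioning as issued by bdev_io_wait.sh steps @18-@25 above.
rpc_target_setup_sketch() {
    local rpc="${SPDK_ROOT:-.}/scripts/rpc.py"   # path is an assumption
    "$rpc" bdev_set_options -p 5 -c 1            # tiny bdev_io pool, to provoke IO_WAIT
    "$rpc" framework_start_init                  # leave the --wait-for-rpc holding state
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0  # 64 MiB bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

The deliberately small `bdev_set_options -p 5 -c 1` pool is what makes bdev IO allocations fail and exercise the io_wait path this test is named for. What follows in the log is the launch of four concurrent bdevperf instances (write/read/flush/unmap) against that listener.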
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.786 { 00:33:24.786 "params": { 00:33:24.786 "name": "Nvme$subsystem", 00:33:24.786 "trtype": "$TEST_TRANSPORT", 00:33:24.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.786 "adrfam": "ipv4", 00:33:24.786 "trsvcid": "$NVMF_PORT", 00:33:24.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.786 "hdgst": ${hdgst:-false}, 00:33:24.786 "ddgst": ${ddgst:-false} 00:33:24.786 }, 00:33:24.786 "method": "bdev_nvme_attach_controller" 00:33:24.786 } 00:33:24.786 EOF 00:33:24.786 )") 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=353753 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.786 05:58:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.786 { 00:33:24.786 "params": { 00:33:24.786 "name": "Nvme$subsystem", 00:33:24.786 "trtype": "$TEST_TRANSPORT", 00:33:24.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.786 "adrfam": "ipv4", 00:33:24.786 "trsvcid": "$NVMF_PORT", 00:33:24.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.786 "hdgst": ${hdgst:-false}, 00:33:24.786 "ddgst": ${ddgst:-false} 00:33:24.786 }, 00:33:24.786 "method": "bdev_nvme_attach_controller" 00:33:24.786 } 00:33:24.786 EOF 00:33:24.786 )") 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=353756 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.786 { 00:33:24.786 "params": { 00:33:24.786 "name": "Nvme$subsystem", 00:33:24.786 "trtype": "$TEST_TRANSPORT", 00:33:24.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.786 "adrfam": "ipv4", 00:33:24.786 "trsvcid": "$NVMF_PORT", 00:33:24.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.786 "hdgst": ${hdgst:-false}, 00:33:24.786 "ddgst": ${ddgst:-false} 00:33:24.786 }, 00:33:24.786 "method": "bdev_nvme_attach_controller" 00:33:24.786 } 00:33:24.786 EOF 00:33:24.786 )") 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.786 { 00:33:24.786 "params": { 00:33:24.786 "name": "Nvme$subsystem", 00:33:24.786 "trtype": "$TEST_TRANSPORT", 00:33:24.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.786 "adrfam": "ipv4", 00:33:24.786 "trsvcid": "$NVMF_PORT", 00:33:24.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.786 "hdgst": ${hdgst:-false}, 00:33:24.786 "ddgst": ${ddgst:-false} 00:33:24.786 }, 00:33:24.786 "method": 
"bdev_nvme_attach_controller" 00:33:24.786 } 00:33:24.786 EOF 00:33:24.786 )") 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 353749 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:24.786 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.786 "params": { 00:33:24.786 "name": "Nvme1", 00:33:24.786 "trtype": "tcp", 00:33:24.786 "traddr": "10.0.0.2", 00:33:24.787 "adrfam": "ipv4", 00:33:24.787 "trsvcid": "4420", 00:33:24.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:24.787 "hdgst": false, 00:33:24.787 "ddgst": false 00:33:24.787 }, 00:33:24.787 "method": "bdev_nvme_attach_controller" 00:33:24.787 }' 00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.787 "params": { 00:33:24.787 "name": "Nvme1", 00:33:24.787 "trtype": "tcp", 00:33:24.787 "traddr": "10.0.0.2", 00:33:24.787 "adrfam": "ipv4", 00:33:24.787 "trsvcid": "4420", 00:33:24.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:24.787 "hdgst": false, 00:33:24.787 "ddgst": false 00:33:24.787 }, 00:33:24.787 "method": "bdev_nvme_attach_controller" 00:33:24.787 }' 00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.787 "params": { 00:33:24.787 "name": "Nvme1", 00:33:24.787 "trtype": "tcp", 00:33:24.787 "traddr": "10.0.0.2", 00:33:24.787 "adrfam": "ipv4", 00:33:24.787 "trsvcid": "4420", 00:33:24.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:24.787 "hdgst": false, 00:33:24.787 "ddgst": false 00:33:24.787 }, 00:33:24.787 "method": "bdev_nvme_attach_controller" 00:33:24.787 }' 00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:24.787 05:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.787 "params": { 00:33:24.787 "name": "Nvme1", 00:33:24.787 "trtype": "tcp", 00:33:24.787 "traddr": "10.0.0.2", 00:33:24.787 "adrfam": "ipv4", 00:33:24.787 "trsvcid": "4420", 00:33:24.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:24.787 "hdgst": false, 00:33:24.787 "ddgst": false 00:33:24.787 }, 00:33:24.787 "method": "bdev_nvme_attach_controller" 
00:33:24.787 }' 00:33:24.787 [2024-12-10 05:58:42.699507] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:33:24.787 [2024-12-10 05:58:42.699508] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:33:24.787 [2024-12-10 05:58:42.699503] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:33:24.787 [2024-12-10 05:58:42.699525] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization...
00:33:24.787 [2024-12-10 05:58:42.699564] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:33:24.787 [2024-12-10 05:58:42.699564] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:33:24.787 [2024-12-10 05:58:42.699566] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:33:24.787 [2024-12-10 05:58:42.699565] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:33:25.044 [2024-12-10 05:58:42.887925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.044 [2024-12-10 05:58:42.933281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:25.044 [2024-12-10 05:58:42.985067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.301 [2024-12-10 05:58:43.029697]
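[Editor's note] The four interleaved bdevperf wrappers above each assemble the same `bdev_nvme_attach_controller` RPC config via a heredoc, validate it with `jq .` (nvmf/common.sh@584), and emit it with `printf '%s\n'` (common.sh@586). A minimal standalone sketch of that assembly; the parameter values are copied from the log, but the `config` variable name and the exact heredoc layout are illustrative, not the literal contents of nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch: build the attach-controller JSON the way the log's
# cat-heredoc / jq / printf pipeline does (values taken from the log above).
config=$(cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# jq . validates and pretty-prints, as common.sh@584 does; fall back to raw
# output when jq is not installed.
if command -v jq >/dev/null 2>&1; then
  printf '%s\n' "$config" | jq .
else
  printf '%s\n' "$config"
fi
```

Each of the four bdevperf instances prints this same block, which is why it appears four times in the interleaved output.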
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:25.301 [2024-12-10 05:58:43.074097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.301 [2024-12-10 05:58:43.118399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:25.301 [2024-12-10 05:58:43.171461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.301 [2024-12-10 05:58:43.223343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:25.558 Running I/O for 1 seconds... 00:33:25.558 Running I/O for 1 seconds... 00:33:25.558 Running I/O for 1 seconds... 00:33:25.558 Running I/O for 1 seconds... 00:33:26.488 7819.00 IOPS, 30.54 MiB/s [2024-12-10T04:58:44.447Z] 11754.00 IOPS, 45.91 MiB/s 00:33:26.488 Latency(us) 00:33:26.488 [2024-12-10T04:58:44.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.488 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:26.488 Nvme1n1 : 1.01 7862.01 30.71 0.00 0.00 16212.12 1536.98 23468.13 00:33:26.488 [2024-12-10T04:58:44.447Z] =================================================================================================================== 00:33:26.488 [2024-12-10T04:58:44.447Z] Total : 7862.01 30.71 0.00 0.00 16212.12 1536.98 23468.13 00:33:26.488 00:33:26.488 Latency(us) 00:33:26.488 [2024-12-10T04:58:44.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.488 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:26.488 Nvme1n1 : 1.01 11798.76 46.09 0.00 0.00 10807.68 4306.65 15166.90 00:33:26.488 [2024-12-10T04:58:44.447Z] =================================================================================================================== 00:33:26.488 [2024-12-10T04:58:44.447Z] Total : 11798.76 46.09 0.00 0.00 10807.68 4306.65 15166.90 00:33:26.488 7939.00 IOPS, 31.01 MiB/s 00:33:26.488 Latency(us) 00:33:26.488 [2024-12-10T04:58:44.447Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.488 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:26.488 Nvme1n1 : 1.01 8084.09 31.58 0.00 0.00 15803.33 2715.06 30084.14 00:33:26.488 [2024-12-10T04:58:44.447Z] =================================================================================================================== 00:33:26.488 [2024-12-10T04:58:44.447Z] Total : 8084.09 31.58 0.00 0.00 15803.33 2715.06 30084.14 00:33:26.488 244712.00 IOPS, 955.91 MiB/s 00:33:26.488 Latency(us) 00:33:26.488 [2024-12-10T04:58:44.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.488 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:26.488 Nvme1n1 : 1.00 244335.42 954.44 0.00 0.00 521.22 224.30 1521.37 00:33:26.488 [2024-12-10T04:58:44.447Z] =================================================================================================================== 00:33:26.488 [2024-12-10T04:58:44.447Z] Total : 244335.42 954.44 0.00 0.00 521.22 224.30 1521.37 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 353751 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 353753 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 353756 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.747 rmmod nvme_tcp 00:33:26.747 rmmod nvme_fabrics 00:33:26.747 rmmod nvme_keyring 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 353507 ']' 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 353507 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 353507 ']' 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 353507 
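[Editor's note] The bdevperf summary tables above are internally consistent: for the 4096-byte jobs, MiB/s = IOPS × 4096 / 2^20. A quick check of the reported totals (figures copied from the tables; the `miops` helper name is an illustration, not part of the test suite):

```shell
# Verify MiB/s = IOPS * io_size / 2^20 for the 4 KiB bdevperf jobs above.
miops() { awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 4096 / 1048576 }'; }
miops 7862.01    # write job (core mask 0x10)
miops 11798.76   # read job  (core mask 0x20)
miops 8084.09    # unmap job (core mask 0x80)
```

These reproduce the 30.71, 46.09, and 31.58 MiB/s columns reported for the three data-carrying workloads.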
00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353507 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353507' 00:33:26.747 killing process with pid 353507 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 353507 00:33:26.747 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 353507 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@791 -- # iptables-restore 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.007 05:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.913 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.913 00:33:28.913 real 0m12.291s 00:33:28.913 user 0m15.103s 00:33:28.913 sys 0m7.089s 00:33:28.913 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.913 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.913 ************************************ 00:33:28.913 END TEST nvmf_bdev_io_wait 00:33:28.913 ************************************ 00:33:29.172 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:29.173 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:29.173 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.173 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:29.173 
************************************ 00:33:29.173 START TEST nvmf_queue_depth 00:33:29.173 ************************************ 00:33:29.173 05:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:29.173 * Looking for test storage... 00:33:29.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@338 -- # local 'op=<' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@355 -- # echo 2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:29.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.173 --rc genhtml_branch_coverage=1 00:33:29.173 --rc genhtml_function_coverage=1 00:33:29.173 --rc genhtml_legend=1 00:33:29.173 --rc geninfo_all_blocks=1 00:33:29.173 --rc geninfo_unexecuted_blocks=1 00:33:29.173 00:33:29.173 ' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:29.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.173 --rc genhtml_branch_coverage=1 00:33:29.173 --rc genhtml_function_coverage=1 00:33:29.173 --rc genhtml_legend=1 00:33:29.173 --rc geninfo_all_blocks=1 00:33:29.173 --rc geninfo_unexecuted_blocks=1 00:33:29.173 00:33:29.173 ' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:29.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.173 --rc genhtml_branch_coverage=1 00:33:29.173 --rc genhtml_function_coverage=1 00:33:29.173 --rc genhtml_legend=1 00:33:29.173 --rc geninfo_all_blocks=1 
00:33:29.173 --rc geninfo_unexecuted_blocks=1 00:33:29.173 00:33:29.173 ' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:29.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.173 --rc genhtml_branch_coverage=1 00:33:29.173 --rc genhtml_function_coverage=1 00:33:29.173 --rc genhtml_legend=1 00:33:29.173 --rc geninfo_all_blocks=1 00:33:29.173 --rc geninfo_unexecuted_blocks=1 00:33:29.173 00:33:29.173 ' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.173 
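[Editor's note] The xtrace run above walks scripts/common.sh's `lt 1.15 2` / `cmp_versions` probe: both version strings are split on dots and compared component by component, numerically, to decide whether the installed lcov is older than 2.x. A simplified sketch of that comparison logic; `version_lt` is a stand-in name, and the real cmp_versions handles more separators and operators than this:

```shell
#!/usr/bin/env bash
# Simplified dotted-version comparison in the spirit of scripts/common.sh:
# split on '.', compare numerically per component; return 0 if $1 < $2.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing components count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0.
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the log's `lt 1.15 2` path takes the "older lcov" branch and sets the fallback LCOV_OPTS seen above.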
05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.173 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.433 05:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:29.433 05:58:47 
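[Editor's note] The enormous PATH in the paths/export.sh lines above is the result of every sourcing prepending the same /opt/golangci, /opt/protoc, and /opt/go directories again without deduplicating. An order-preserving dedup can be done in one awk pass; this is purely an illustration of the fix, not something the SPDK scripts actually do, and `dedup_path` is a hypothetical helper name:

```shell
# Order-preserving PATH dedup: keep only the first occurrence of each entry.
# awk splits on ':' via RS and prints entries not seen before.
dedup_path() {
  printf '%s' "$1" |
    awk -v RS=: '!seen[$0]++ { out = out (out ? ":" : "") $0 } END { print out }'
}
dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin"
# -> /opt/go/bin:/usr/bin
```

Applied to the PATH above, this would collapse the seven repeated /opt/golangci:/opt/protoc:/opt/go runs down to one.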
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:29.433 05:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:29.433 05:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.001 
05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.001 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.002 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.002 05:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.002 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.002 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.002 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.002 05:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:36.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:33:36.002 00:33:36.002 --- 10.0.0.2 ping statistics --- 00:33:36.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.002 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:36.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:33:36.002 00:33:36.002 --- 10.0.0.1 ping statistics --- 00:33:36.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.002 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.002 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.003 05:58:53 
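The setup above can be summarized as a small topology: the target NIC (cvl_0_0) is moved into a dedicated network namespace with 10.0.0.2/24, while the initiator NIC (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, and both directions are verified with ping. A dry-run sketch of those steps, with interface and namespace names taken from the log (this is an illustrative sketch, not part of nvmf/common.sh; the `run` helper only prints the commands — replace `echo` with `sudo` to actually apply them as root):

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology built by the log above.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }                      # print-only; swap for 'sudo' to execute

run ip netns add "$NS"                      # create the target-side namespace
run ip link set cvl_0_0 netns "$NS"         # move the target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator IP in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                      # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns
```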
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=358000 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 358000 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 358000 ']' 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.003 05:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:36.003 [2024-12-10 05:58:53.889979] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:36.003 [2024-12-10 05:58:53.890888] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:33:36.003 [2024-12-10 05:58:53.890925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.262 [2024-12-10 05:58:53.978516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.262 [2024-12-10 05:58:54.016360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.262 [2024-12-10 05:58:54.016397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.262 [2024-12-10 05:58:54.016404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.262 [2024-12-10 05:58:54.016410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.262 [2024-12-10 05:58:54.016415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.262 [2024-12-10 05:58:54.016940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.262 [2024-12-10 05:58:54.084021] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:36.262 [2024-12-10 05:58:54.084235] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:36.830 [2024-12-10 05:58:54.765668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.830 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.089 Malloc0 00:33:37.089 05:58:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.089 [2024-12-10 05:58:54.845575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.089 
05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=358053 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 358053 /var/tmp/bdevperf.sock 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 358053 ']' 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:37.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.089 05:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.089 [2024-12-10 05:58:54.895639] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:33:37.089 [2024-12-10 05:58:54.895681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358053 ] 00:33:37.089 [2024-12-10 05:58:54.973768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.089 [2024-12-10 05:58:55.014692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.352 NVMe0n1 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.352 05:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:37.352 Running I/O for 10 seconds... 
00:33:39.661 11738.00 IOPS, 45.85 MiB/s [2024-12-10T04:58:58.553Z] 12197.00 IOPS, 47.64 MiB/s [2024-12-10T04:58:59.486Z] 12289.33 IOPS, 48.01 MiB/s [2024-12-10T04:59:00.418Z] 12306.25 IOPS, 48.07 MiB/s [2024-12-10T04:59:01.352Z] 12437.80 IOPS, 48.59 MiB/s [2024-12-10T04:59:02.726Z] 12453.67 IOPS, 48.65 MiB/s [2024-12-10T04:59:03.660Z] 12473.14 IOPS, 48.72 MiB/s [2024-12-10T04:59:04.594Z] 12517.12 IOPS, 48.90 MiB/s [2024-12-10T04:59:05.528Z] 12541.56 IOPS, 48.99 MiB/s [2024-12-10T04:59:05.528Z] 12588.20 IOPS, 49.17 MiB/s 00:33:47.569 Latency(us) 00:33:47.569 [2024-12-10T04:59:05.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.569 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:47.569 Verification LBA range: start 0x0 length 0x4000 00:33:47.569 NVMe0n1 : 10.06 12611.16 49.26 0.00 0.00 80946.67 19223.89 53926.77 00:33:47.569 [2024-12-10T04:59:05.528Z] =================================================================================================================== 00:33:47.569 [2024-12-10T04:59:05.528Z] Total : 12611.16 49.26 0.00 0.00 80946.67 19223.89 53926.77 00:33:47.569 { 00:33:47.569 "results": [ 00:33:47.569 { 00:33:47.569 "job": "NVMe0n1", 00:33:47.569 "core_mask": "0x1", 00:33:47.569 "workload": "verify", 00:33:47.569 "status": "finished", 00:33:47.569 "verify_range": { 00:33:47.569 "start": 0, 00:33:47.569 "length": 16384 00:33:47.569 }, 00:33:47.569 "queue_depth": 1024, 00:33:47.569 "io_size": 4096, 00:33:47.569 "runtime": 10.062989, 00:33:47.569 "iops": 12611.163541965514, 00:33:47.569 "mibps": 49.26235758580279, 00:33:47.569 "io_failed": 0, 00:33:47.569 "io_timeout": 0, 00:33:47.569 "avg_latency_us": 80946.66685662353, 00:33:47.569 "min_latency_us": 19223.893333333333, 00:33:47.569 "max_latency_us": 53926.76571428571 00:33:47.569 } 00:33:47.569 ], 00:33:47.569 "core_count": 1 00:33:47.569 } 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
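The MiB/s column in the bdevperf summary above is derived from IOPS and the 4096-byte I/O size (`-o 4096`). A quick shell check of that arithmetic against the reported totals (an illustrative sanity check, not part of queue_depth.sh):

```shell
#!/bin/sh
# Verify MiB/s = IOPS * io_size / 2^20 for the final bdevperf result above.
iops=12611.163541965514    # "iops" from the JSON result block
io_size=4096               # bytes per I/O, from 'bdevperf ... -o 4096'
mibps=$(awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$mibps"              # matches the reported 49.26 MiB/s
```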
target/queue_depth.sh@39 -- # killprocess 358053 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 358053 ']' 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 358053 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358053 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358053' 00:33:47.569 killing process with pid 358053 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 358053 00:33:47.569 Received shutdown signal, test time was about 10.000000 seconds 00:33:47.569 00:33:47.569 Latency(us) 00:33:47.569 [2024-12-10T04:59:05.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.569 [2024-12-10T04:59:05.528Z] =================================================================================================================== 00:33:47.569 [2024-12-10T04:59:05.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.569 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 358053 00:33:47.827 05:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.827 rmmod nvme_tcp 00:33:47.827 rmmod nvme_fabrics 00:33:47.827 rmmod nvme_keyring 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.827 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 358000 ']' 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 358000 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 358000 ']' 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 358000 00:33:47.828 05:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358000 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358000' 00:33:47.828 killing process with pid 358000 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 358000 00:33:47.828 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 358000 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.087 05:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.623 05:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.623 00:33:50.623 real 0m21.057s 00:33:50.623 user 0m23.002s 00:33:50.623 sys 0m6.876s 00:33:50.623 05:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.623 05:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:50.623 ************************************ 00:33:50.623 END TEST nvmf_queue_depth 00:33:50.623 ************************************ 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:50.623 ************************************ 00:33:50.623 START 
TEST nvmf_target_multipath 00:33:50.623 ************************************ 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:50.623 * Looking for test storage... 00:33:50.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.623 05:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.623 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.624 --rc genhtml_branch_coverage=1 00:33:50.624 --rc genhtml_function_coverage=1 00:33:50.624 --rc genhtml_legend=1 00:33:50.624 --rc geninfo_all_blocks=1 00:33:50.624 --rc geninfo_unexecuted_blocks=1 00:33:50.624 00:33:50.624 ' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.624 --rc genhtml_branch_coverage=1 00:33:50.624 --rc genhtml_function_coverage=1 00:33:50.624 --rc genhtml_legend=1 00:33:50.624 --rc geninfo_all_blocks=1 00:33:50.624 --rc geninfo_unexecuted_blocks=1 00:33:50.624 00:33:50.624 ' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.624 --rc genhtml_branch_coverage=1 00:33:50.624 --rc genhtml_function_coverage=1 00:33:50.624 --rc genhtml_legend=1 00:33:50.624 --rc geninfo_all_blocks=1 00:33:50.624 --rc geninfo_unexecuted_blocks=1 00:33:50.624 00:33:50.624 ' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.624 --rc genhtml_branch_coverage=1 00:33:50.624 --rc genhtml_function_coverage=1 00:33:50.624 --rc genhtml_legend=1 00:33:50.624 --rc geninfo_all_blocks=1 00:33:50.624 --rc geninfo_unexecuted_blocks=1 00:33:50.624 00:33:50.624 ' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.624 05:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.624 05:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:50.624 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:50.625 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.625 05:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:57.197 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.198 05:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:57.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:57.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:57.198 Found net devices under 0000:af:00.0: cvl_0_0 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.198 05:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:57.198 Found net devices under 0000:af:00.1: cvl_0_1 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.198 05:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.198 05:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.198 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:33:57.199 00:33:57.199 --- 10.0.0.2 ping statistics --- 00:33:57.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.199 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:33:57.199 00:33:57.199 --- 10.0.0.1 ping statistics --- 00:33:57.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.199 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:57.199 only one NIC for nvmf test 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:57.199 05:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.199 05:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:57.199 rmmod nvme_tcp 00:33:57.199 rmmod nvme_fabrics 00:33:57.199 rmmod nvme_keyring 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:57.199 05:59:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.199 05:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
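The `iptr` cleanup traced here removes every firewall rule the test added by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`: each rule was originally inserted with an `-m comment --comment 'SPDK_NVMF:...'` tag, so filtering the saved ruleset by that marker and restoring it drops exactly the test's rules and nothing else. A minimal sketch of the filter stage (the `strip_spdk_rules` name is illustrative, not part of SPDK's nvmf/common.sh; the full pipeline needs root):

```shell
# Pure text filter replicating the iptables-save | grep -v SPDK_NVMF |
# iptables-restore trick: drop every rule line carrying the SPDK_NVMF
# comment tag, keep all other rules untouched.
strip_spdk_rules() {
    grep -v 'SPDK_NVMF' || true   # || true: an all-SPDK ruleset is not an error
}

# Real cleanup would be (as root):
#   iptables-save | strip_spdk_rules | iptables-restore
```

Tagging rules with a comment at insert time is what makes this teardown safe to run repeatedly: it never has to remember rule positions or counts.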
00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.736 
05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.736 00:33:59.736 real 0m9.090s 00:33:59.736 user 0m2.040s 00:33:59.736 sys 0m5.078s 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:59.736 ************************************ 00:33:59.736 END TEST nvmf_target_multipath 00:33:59.736 ************************************ 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:59.736 ************************************ 00:33:59.736 START TEST nvmf_zcopy 00:33:59.736 ************************************ 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:59.736 * Looking for test storage... 
00:33:59.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.736 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:59.737 05:59:17 
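The `cmp_versions 1.15 '<' 2` trace above is `scripts/common.sh` checking whether the installed lcov predates 2.0: each version string is split into numeric components on `.` and `-` (the `IFS=.-` plus `read -ra` lines), missing components default to zero, and the arrays are compared element by element. A condensed re-implementation sketch (the `version_lt` helper is hypothetical, not SPDK's exact code):

```shell
# version_lt A B: exit 0 if version A sorts strictly before version B,
# comparing numeric components split on '.' and '-'; the shorter version
# is implicitly zero-padded, and equal versions are not "less than".
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing component counts as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1
}
```

Splitting on `-` as well as `.` is what lets forms like `1.14-5` compare sensibly against plain `1.15`.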
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:59.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.737 --rc genhtml_branch_coverage=1 00:33:59.737 --rc genhtml_function_coverage=1 00:33:59.737 --rc genhtml_legend=1 00:33:59.737 --rc geninfo_all_blocks=1 00:33:59.737 --rc geninfo_unexecuted_blocks=1 00:33:59.737 00:33:59.737 ' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:59.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.737 --rc genhtml_branch_coverage=1 00:33:59.737 --rc genhtml_function_coverage=1 00:33:59.737 --rc genhtml_legend=1 00:33:59.737 --rc geninfo_all_blocks=1 00:33:59.737 --rc geninfo_unexecuted_blocks=1 00:33:59.737 00:33:59.737 ' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:59.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.737 --rc genhtml_branch_coverage=1 00:33:59.737 --rc genhtml_function_coverage=1 00:33:59.737 --rc genhtml_legend=1 00:33:59.737 --rc geninfo_all_blocks=1 00:33:59.737 --rc geninfo_unexecuted_blocks=1 00:33:59.737 00:33:59.737 ' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:59.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.737 --rc genhtml_branch_coverage=1 00:33:59.737 --rc genhtml_function_coverage=1 00:33:59.737 --rc genhtml_legend=1 00:33:59.737 --rc geninfo_all_blocks=1 00:33:59.737 --rc geninfo_unexecuted_blocks=1 00:33:59.737 00:33:59.737 ' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.737 05:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.737 05:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:59.737 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:59.738 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:59.738 05:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:06.308 
05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.308 05:59:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.308 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:06.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:06.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:06.309 Found net devices under 0000:af:00.0: cvl_0_0 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:06.309 Found net devices under 0000:af:00.1: cvl_0_1 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
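The discovery loop traced above (`gather_supported_nvmf_pci_devs`) matches each PCI function's vendor:device pair against known Intel/Mellanox NIC IDs, then lists the kernel net devices under `/sys/bus/pci/devices/<bdf>/net/` — which is how the two E810 ports resolve to `cvl_0_0` and `cvl_0_1` here. A standalone sketch with the ID table trimmed to the `0x8086:0x159b` pair seen in this log (the `scan_nics` helper and its sysfs-root parameter are illustrative, not SPDK's function names):

```shell
shopt -s nullglob   # unmatched globs expand to nothing, not themselves

# scan_nics SYSFS_ROOT: print the net devices found under supported PCI NICs.
# Only the Intel 0x8086:0x159b (E810-family) pair from the log is listed here.
scan_nics() {
    local root=$1 dev vendor device net
    local -A supported=( ["0x8086:0x159b"]=1 )
    for dev in "$root"/*; do
        [ -f "$dev/vendor" ] || continue
        vendor=$(<"$dev/vendor")            # e.g. 0x8086
        device=$(<"$dev/device")            # e.g. 0x159b
        [ -n "${supported[$vendor:$device]:-}" ] || continue
        for net in "$dev"/net/*; do
            echo "Found net devices under ${dev##*/}: ${net##*/}"
        done
    done
}

# Against a live system: scan_nics /sys/bus/pci/devices
```

Reading sysfs directly keeps the scan driver-agnostic: whatever name the kernel gave the interface appears under the PCI function's `net/` directory.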
00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:06.309 05:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:06.309 05:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:06.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:06.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:34:06.309 00:34:06.309 --- 10.0.0.2 ping statistics --- 00:34:06.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.309 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:06.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:06.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:34:06.309 00:34:06.309 --- 10.0.0.1 ping statistics --- 00:34:06.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.309 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=367571 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 367571 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 367571 ']' 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.309 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.309 [2024-12-10 05:59:24.124389] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:06.309 [2024-12-10 05:59:24.125287] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:34:06.310 [2024-12-10 05:59:24.125321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.310 [2024-12-10 05:59:24.208513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.310 [2024-12-10 05:59:24.247492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.310 [2024-12-10 05:59:24.247525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.310 [2024-12-10 05:59:24.247532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.310 [2024-12-10 05:59:24.247538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.310 [2024-12-10 05:59:24.247543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.310 [2024-12-10 05:59:24.248087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.568 [2024-12-10 05:59:24.315038] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:06.568 [2024-12-10 05:59:24.315285] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
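The nvmf_tcp_init sequence traced above (flush both interfaces, create a network namespace for the target side, move cvl_0_0 into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, and ping in both directions) can be sketched as a standalone script. Interface names, addresses, and the port are taken from the log; the dry-run wrapper is an illustrative addition, not part of nvmf/common.sh — clear DRY_RUN and run as root to actually apply it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps logged at nvmf/common.sh@250-291.
# With DRY_RUN unset it only prints the commands; set DRY_RUN= (empty) and
# run as root to execute them for real.
TARGET_IF=cvl_0_0              # moved into the target netns
INITIATOR_IF=cvl_0_1           # stays in the default netns
NS=${TARGET_IF}_ns_spdk        # cvl_0_0_ns_spdk, as in the log
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                         # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"  # target -> initiator
```

The real helper additionally tags the iptables rule with an SPDK_NVMF comment (the ipts wrapper visible at common.sh@790 above) so that teardown can find and delete exactly the rules the test added.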
00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 [2024-12-10 05:59:24.384782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 
05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 [2024-12-10 05:59:24.412982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 malloc0 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.568 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.569 { 00:34:06.569 "params": { 00:34:06.569 "name": "Nvme$subsystem", 00:34:06.569 "trtype": "$TEST_TRANSPORT", 00:34:06.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.569 "adrfam": "ipv4", 00:34:06.569 "trsvcid": "$NVMF_PORT", 00:34:06.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.569 "hdgst": ${hdgst:-false}, 00:34:06.569 "ddgst": ${ddgst:-false} 00:34:06.569 }, 00:34:06.569 "method": "bdev_nvme_attach_controller" 00:34:06.569 } 00:34:06.569 EOF 00:34:06.569 )") 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:06.569 05:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:06.569 05:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:06.569 "params": { 00:34:06.569 "name": "Nvme1", 00:34:06.569 "trtype": "tcp", 00:34:06.569 "traddr": "10.0.0.2", 00:34:06.569 "adrfam": "ipv4", 00:34:06.569 "trsvcid": "4420", 00:34:06.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.569 "hdgst": false, 00:34:06.569 "ddgst": false 00:34:06.569 }, 00:34:06.569 "method": "bdev_nvme_attach_controller" 00:34:06.569 }' 00:34:06.569 [2024-12-10 05:59:24.507222] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:34:06.569 [2024-12-10 05:59:24.507275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367631 ] 00:34:06.827 [2024-12-10 05:59:24.585015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.827 [2024-12-10 05:59:24.624459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.085 Running I/O for 10 seconds... 
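The JSON that bdevperf read from /dev/fd/62 above is assembled by the gen_nvmf_target_json helper traced at nvmf/common.sh@560-586: one bdev_nvme_attach_controller entry per subsystem index (default 1), comma-joined and then pretty-printed through jq. A minimal re-creation is sketched below; the variable defaults mirror this run, and the jq pass is omitted for brevity.

```shell
# Sketch of gen_nvmf_target_json from nvmf/common.sh. The real helper takes
# TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT from the test environment and
# validates the result with `jq .`; the defaults here are the values this run used.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json() {
	local subsystem config=()
	# One attach-controller entry per requested subsystem index, default 1.
	for subsystem in "${@:-1}"; do
		config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s }, "method": "bdev_nvme_attach_controller" }' \
			"$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
			"$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
	done
	local IFS=,            # join multiple entries with commas
	printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json   # one entry for cnode1, as in the run above
```

Passed over a pipe or /dev/fd, this lets the same bdevperf binary target whatever transport and address the surrounding test configured, without writing a config file to disk.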
00:34:09.394 8550.00 IOPS, 66.80 MiB/s
[2024-12-10T04:59:28.286Z] 8618.50 IOPS, 67.33 MiB/s
[2024-12-10T04:59:29.220Z] 8635.33 IOPS, 67.46 MiB/s
[2024-12-10T04:59:30.154Z] 8658.25 IOPS, 67.64 MiB/s
[2024-12-10T04:59:31.088Z] 8663.80 IOPS, 67.69 MiB/s
[2024-12-10T04:59:32.023Z] 8649.00 IOPS, 67.57 MiB/s
[2024-12-10T04:59:33.398Z] 8656.71 IOPS, 67.63 MiB/s
[2024-12-10T04:59:34.333Z] 8654.25 IOPS, 67.61 MiB/s
[2024-12-10T04:59:35.272Z] 8661.33 IOPS, 67.67 MiB/s
[2024-12-10T04:59:35.272Z] 8662.70 IOPS, 67.68 MiB/s
00:34:17.313 Latency(us)
00:34:17.313 [2024-12-10T04:59:35.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:17.313 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:34:17.313 Verification LBA range: start 0x0 length 0x1000
00:34:17.313 Nvme1n1 : 10.01 8665.52 67.70 0.00 0.00 14729.44 2356.18 21221.18
00:34:17.313 [2024-12-10T04:59:35.272Z] ===================================================================================================================
00:34:17.313 [2024-12-10T04:59:35.272Z] Total : 8665.52 67.70 0.00 0.00 14729.44 2356.18 21221.18
00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=369212 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:17.313 05:59:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.313 { 00:34:17.313 "params": { 00:34:17.313 "name": "Nvme$subsystem", 00:34:17.313 "trtype": "$TEST_TRANSPORT", 00:34:17.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.313 "adrfam": "ipv4", 00:34:17.313 "trsvcid": "$NVMF_PORT", 00:34:17.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.313 "hdgst": ${hdgst:-false}, 00:34:17.313 "ddgst": ${ddgst:-false} 00:34:17.313 }, 00:34:17.313 "method": "bdev_nvme_attach_controller" 00:34:17.313 } 00:34:17.313 EOF 00:34:17.313 )") 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:17.313 [2024-12-10 05:59:35.136442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.136475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:17.313 05:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.313 "params": { 00:34:17.313 "name": "Nvme1", 00:34:17.313 "trtype": "tcp", 00:34:17.313 "traddr": "10.0.0.2", 00:34:17.313 "adrfam": "ipv4", 00:34:17.313 "trsvcid": "4420", 00:34:17.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.313 "hdgst": false, 00:34:17.313 "ddgst": false 00:34:17.313 }, 00:34:17.313 "method": "bdev_nvme_attach_controller" 00:34:17.313 }' 00:34:17.313 [2024-12-10 05:59:35.148405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.148417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.160402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.160411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.172402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.172410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.174114] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:34:17.313 [2024-12-10 05:59:35.174154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369212 ] 00:34:17.313 [2024-12-10 05:59:35.184402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.184412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.196402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.196412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.208403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.208412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.220401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.220411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.232400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.232409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.244400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.313 [2024-12-10 05:59:35.244410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.313 [2024-12-10 05:59:35.253961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.313 [2024-12-10 05:59:35.256403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:17.313 [2024-12-10 05:59:35.256413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.268442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.268467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.280435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.280455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.292402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.292412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.294789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.572 [2024-12-10 05:59:35.304406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.304418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.316414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.316434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.328407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.328422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.340402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.340413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.352403] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.352414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.364401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.364412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.376408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.376425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.388408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.388425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.400408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.400422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.412407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.412422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.424404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.424417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.436401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.436411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.448400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.448410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.460404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.460418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.472401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.472410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.484400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.484410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.496401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.496410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.508404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.508417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.572 [2024-12-10 05:59:35.520401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.572 [2024-12-10 05:59:35.520410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.532412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.532430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.544403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 
[2024-12-10 05:59:35.544415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.556408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.556426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 Running I/O for 5 seconds... 00:34:17.830 [2024-12-10 05:59:35.568436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.568452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.585056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.585076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.599944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.599963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.613838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.613858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.628254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.628272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.639520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.639538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.654241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 
05:59:35.654259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.668950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.668967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.683746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.683764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.698814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.698833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.713169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.713186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.725386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.725404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.740246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.740264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.751243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.751261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.830 [2024-12-10 05:59:35.766016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.830 [2024-12-10 05:59:35.766033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:34:17.831 [2024-12-10 05:59:35.781180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.831 [2024-12-10 05:59:35.781200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.796316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.796335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.807751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.807770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.821909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.821927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.836793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.836810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.852855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.852872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.868360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.868378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.882036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.882054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 
[2024-12-10 05:59:35.896752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.896770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.912319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.912344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.925203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.925228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.937903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.937922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.952429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.952447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.965281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.965299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.979905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.979923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:35.993383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:35.993401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:36.007860] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:36.007878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:36.021268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:36.021286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.089 [2024-12-10 05:59:36.036052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.089 [2024-12-10 05:59:36.036070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.050398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.050418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.065116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.065139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.080925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.080943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.095944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.095963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.110022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.110039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.124692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.124709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.140490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.140508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.154246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.154265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.169421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.169440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.184446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.184471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.197797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.197815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.212432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.212450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.226185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.226204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.240648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 
[2024-12-10 05:59:36.240666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.251239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.251257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.265654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.265672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.279835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.279854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.347 [2024-12-10 05:59:36.293736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.347 [2024-12-10 05:59:36.293754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.304667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.304685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.318340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.318358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.333176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.333194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.347807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.347825] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.362471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.362489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.376847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.376864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.389015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.389032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.404758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.404777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.420163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.420181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.432504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.432522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.446282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.446305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.461092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.461109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:18.605 [2024-12-10 05:59:36.476369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.476387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.490076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.490094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.504561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.504579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.516409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.516427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.529944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.529962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.544468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.544485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.605 [2024-12-10 05:59:36.557236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.605 [2024-12-10 05:59:36.557259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 16838.00 IOPS, 131.55 MiB/s [2024-12-10T04:59:36.822Z] [2024-12-10 05:59:36.571904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.571925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:18.863 [2024-12-10 05:59:36.586017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.586035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.600861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.600878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.616401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.616423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.629465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.629482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.644249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.644269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.658393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.658411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.673014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.673033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.688828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.688846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.704364] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.704384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.717657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.717675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.728735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.728752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.741682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.741700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.756846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.756863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.772434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.772452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.784593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.784612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.798586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.798605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.863 [2024-12-10 05:59:36.813583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:18.863 [2024-12-10 05:59:36.813602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.828831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.828849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.844484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.844504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.858583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.858601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.872736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.872754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.888397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.888417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.901349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.901367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.913064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.913081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.926356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 
[2024-12-10 05:59:36.926375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.941308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.941326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.956454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.956473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.968869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.968886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.984270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.984289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:36.998388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:36.998406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:37.013418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:37.013436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:37.027860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:37.027879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:37.042632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:37.042649] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:37.056628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:37.056647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.122 [2024-12-10 05:59:37.067826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.122 [2024-12-10 05:59:37.067843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.380 [2024-12-10 05:59:37.082543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.380 [2024-12-10 05:59:37.082564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.380 [2024-12-10 05:59:37.097312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.380 [2024-12-10 05:59:37.097331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.111919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.111938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.126304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.126322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.141512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.141529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.156241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.156260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:19.381 [2024-12-10 05:59:37.168932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.168950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.184562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.184580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.196890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.196906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.209837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.209855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.224705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.224722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.240091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.240109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.254602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.254620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.269306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.269324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.285252] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.285270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.295720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.295738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.310410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.310428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.381 [2024-12-10 05:59:37.324689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.381 [2024-12-10 05:59:37.324707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.340687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.340706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.356643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.356662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.367812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.367830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.381854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.381872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.396012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.396030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.409435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.409454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.424306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.424325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.438266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.438286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.452838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.452855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.468107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.468125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.482524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.482543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.496766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.496783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.509382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 
[2024-12-10 05:59:37.509404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.522181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.522199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.536694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.536711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.550182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.550200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 [2024-12-10 05:59:37.565054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.565072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.639 16843.50 IOPS, 131.59 MiB/s [2024-12-10T04:59:37.598Z] [2024-12-10 05:59:37.580164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.639 [2024-12-10 05:59:37.580183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.594361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.594382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.609021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.609040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.624437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 
[2024-12-10 05:59:37.624455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.637478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.637496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.652775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.652793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.668663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.668681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.681622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.681640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.692516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.692534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.706553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.706571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.720797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.720814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.736611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.736630] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.749484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.749502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.764135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.764153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.776942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.776965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.789961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.789979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.804398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.804416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.817139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.817156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.832506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.832524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.898 [2024-12-10 05:59:37.845244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.898 [2024-12-10 05:59:37.845261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:20.157 [2024-12-10 05:59:37.860594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.860614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.871696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.871715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.886085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.886103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.900672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.900689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.916150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.916168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.930242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.930259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.945006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.945023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.960452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.960470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.971707] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.971724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:37.986733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:37.986750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.001058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.001075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.013096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.013113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.027647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.027665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.042261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.042284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.056842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.056860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.072214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.072237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.085045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.085064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.157 [2024-12-10 05:59:38.100101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.157 [2024-12-10 05:59:38.100120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.115072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.115094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.129855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.129875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.144255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.144275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.156892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.156911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.170438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.170457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.185037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.185055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.198170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 
[2024-12-10 05:59:38.198188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.212981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.213000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.228361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.228379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.241869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.241888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.256352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.256370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.269272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.269291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.284260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.284278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.297187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.297205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.311912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.311936] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.324673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.324691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.338431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.338449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.415 [2024-12-10 05:59:38.353382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.415 [2024-12-10 05:59:38.353400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.368883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.368906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.384529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.384550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.398302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.398321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.413216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.413243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.428880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.428898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:20.674 [2024-12-10 05:59:38.440074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.440092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.454615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.454634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.469154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.469174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.484896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.484914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.500097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.500116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.514279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.514298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.529045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.529063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.544406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.544424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.557364] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.557381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.572309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.572327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 16856.00 IOPS, 131.69 MiB/s [2024-12-10T04:59:38.633Z] [2024-12-10 05:59:38.585696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.585714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.600129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.600148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.674 [2024-12-10 05:59:38.613312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.674 [2024-12-10 05:59:38.613329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.628852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.628871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.643480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.643499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.657709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.657728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.672188] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.672206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.685731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.685749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.696637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.696654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.710116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.710133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.725101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.725119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.740940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.740958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.756403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.756421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.770425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.770443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.785386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.785404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.800479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.800497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.811552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.811570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.826447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.826465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.840892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.840910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.856601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.856620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.870087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.870104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.933 [2024-12-10 05:59:38.884564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.933 [2024-12-10 05:59:38.884583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.896831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 
[2024-12-10 05:59:38.896849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.909954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.909973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.924649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.924667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.935017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.935035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.949499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.949517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.964254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.964272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.978458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.978476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:38.993474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:38.993492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.008110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.008128] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.021429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.021447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.036401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.036420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.048790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.048807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.064232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.064250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.078340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.078357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.092782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.092799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.108326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.108349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.192 [2024-12-10 05:59:39.122236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.122255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:21.192 [2024-12-10 05:59:39.137054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.192 [2024-12-10 05:59:39.137071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.152431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.152452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.165231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.165249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.180110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.180128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.192799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.192816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.206023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.206040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.221047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.221064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.236653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.236671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.250296] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.250313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.265086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.265103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.280161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.280178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.294357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.294375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.309092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.309109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.323824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.323842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.337800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.337818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.352854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.352871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.368286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.368304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.381430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.381451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.451 [2024-12-10 05:59:39.396348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.451 [2024-12-10 05:59:39.396366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.410173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.410192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.424628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.424647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.437427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.437444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.452313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.452333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.465826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.465845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.480284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 
[2024-12-10 05:59:39.480303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.494026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.494045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.508941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.508959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.524303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.524323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.709 [2024-12-10 05:59:39.537418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.709 [2024-12-10 05:59:39.537437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.552114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.552134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.566240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.566259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 16875.75 IOPS, 131.84 MiB/s [2024-12-10T04:59:39.669Z] [2024-12-10 05:59:39.581047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.581065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.596410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 
[2024-12-10 05:59:39.596428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.608406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.608425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.622627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.622645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.636892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.636910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.710 [2024-12-10 05:59:39.652188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.710 [2024-12-10 05:59:39.652211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.666000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.666021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.680624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.680643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.693237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.693255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.708085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.708104] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.719422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.719441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.734091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.734110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.748717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.748735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.764151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.764169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.778223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.778241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.792883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.792902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.808064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.808082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.822594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.822612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:21.968 [2024-12-10 05:59:39.837134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.837152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.852137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.852155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.866133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.866151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.880575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.880593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.891289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.891307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.905613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.905631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.968 [2024-12-10 05:59:39.920650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.968 [2024-12-10 05:59:39.920670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:39.934343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:39.934364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:39.949246] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:39.949265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:39.964908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:39.964925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:39.976886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:39.976903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:39.990020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:39.990037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.004815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.004832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.020183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.020203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.033394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.033422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.048634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.048654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.059971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.059989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.073896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.073914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.089561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.089580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.104295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.104313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.117077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.117095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.132448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.132466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.146517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.146535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.161225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 [2024-12-10 05:59:40.161243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.227 [2024-12-10 05:59:40.176213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.227 
[2024-12-10 05:59:40.176241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.190046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.190066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.205046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.205064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.220885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.220903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.235989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.236007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.248224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.248242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.262818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.262836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.277591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.277609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.292073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.292092] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.305367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.305385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.320614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.320633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.331137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.331154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.345727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.345744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.360212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.360236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.373922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.373940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.388645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.388663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.399122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.399140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:22.486 [2024-12-10 05:59:40.413727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.413746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.486 [2024-12-10 05:59:40.428351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.486 [2024-12-10 05:59:40.428369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.442978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.442997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.457471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.457489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.472748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.472765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.488779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.488797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.504432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.504451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.518511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.518531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.532535] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.532553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.545051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.545069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.559694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.559712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.574416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.574435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 16866.20 IOPS, 131.77 MiB/s [2024-12-10T04:59:40.704Z] [2024-12-10 05:59:40.587375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.587393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 00:34:22.745 Latency(us) 00:34:22.745 [2024-12-10T04:59:40.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:22.745 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:22.745 Nvme1n1 : 5.01 16865.93 131.77 0.00 0.00 7581.71 2153.33 13544.11 00:34:22.745 [2024-12-10T04:59:40.704Z] =================================================================================================================== 00:34:22.745 [2024-12-10T04:59:40.704Z] Total : 16865.93 131.77 0.00 0.00 7581.71 2153.33 13544.11 00:34:22.745 [2024-12-10 05:59:40.596407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.596423] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.608407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.608421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.620418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.620434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.632407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.632423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.644408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.644421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.656404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.656424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.668403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.668417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.680404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.680416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.745 [2024-12-10 05:59:40.692408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.745 [2024-12-10 05:59:40.692422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:23.003 [2024-12-10 05:59:40.704420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.003 [2024-12-10 05:59:40.704439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.003 [2024-12-10 05:59:40.716403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.003 [2024-12-10 05:59:40.716414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.003 [2024-12-10 05:59:40.728405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.003 [2024-12-10 05:59:40.728416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.003 [2024-12-10 05:59:40.740402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.003 [2024-12-10 05:59:40.740411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (369212) - No such process 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 369212 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:23.003 05:59:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:23.003 delay0 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:23.003 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.004 05:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:23.004 [2024-12-10 05:59:40.888164] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:29.630 Initializing NVMe Controllers 00:34:29.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:29.630 Initialization complete. Launching workers. 
00:34:29.630 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3268 00:34:29.630 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3540, failed to submit 48 00:34:29.630 success 3400, unsuccessful 140, failed 0 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.630 rmmod nvme_tcp 00:34:29.630 rmmod nvme_fabrics 00:34:29.630 rmmod nvme_keyring 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 367571 ']' 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 367571 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 367571 ']' 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 367571 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367571 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367571' 00:34:29.630 killing process with pid 367571 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 367571 00:34:29.630 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 367571 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.926 05:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.926 05:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.480 00:34:32.480 real 0m32.616s 00:34:32.480 user 0m41.266s 00:34:32.480 sys 0m13.385s 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:32.480 ************************************ 00:34:32.480 END TEST nvmf_zcopy 00:34:32.480 ************************************ 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.480 
************************************ 00:34:32.480 START TEST nvmf_nmic 00:34:32.480 ************************************ 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:32.480 * Looking for test storage... 00:34:32.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:34:32.480 05:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.480 05:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.480 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.481 05:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:32.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.481 --rc genhtml_branch_coverage=1 00:34:32.481 --rc genhtml_function_coverage=1 00:34:32.481 --rc genhtml_legend=1 00:34:32.481 --rc geninfo_all_blocks=1 00:34:32.481 --rc geninfo_unexecuted_blocks=1 00:34:32.481 00:34:32.481 ' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:32.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.481 --rc genhtml_branch_coverage=1 00:34:32.481 --rc genhtml_function_coverage=1 00:34:32.481 --rc genhtml_legend=1 00:34:32.481 --rc geninfo_all_blocks=1 00:34:32.481 --rc geninfo_unexecuted_blocks=1 00:34:32.481 00:34:32.481 ' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:32.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.481 --rc genhtml_branch_coverage=1 00:34:32.481 --rc genhtml_function_coverage=1 00:34:32.481 --rc genhtml_legend=1 00:34:32.481 --rc geninfo_all_blocks=1 00:34:32.481 --rc geninfo_unexecuted_blocks=1 00:34:32.481 00:34:32.481 ' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:32.481 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.481 --rc genhtml_branch_coverage=1 00:34:32.481 --rc genhtml_function_coverage=1 00:34:32.481 --rc genhtml_legend=1 00:34:32.481 --rc geninfo_all_blocks=1 00:34:32.481 --rc geninfo_unexecuted_blocks=1 00:34:32.481 00:34:32.481 ' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:32.481 05:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.481 05:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.481 05:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.053 05:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:39.053 05:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:39.053 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:39.053 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.053 05:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:39.053 Found net devices under 0000:af:00.0: cvl_0_0 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.053 05:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:39.053 Found net devices under 0000:af:00.1: cvl_0_1 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:39.053 05:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:39.053 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:39.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:34:39.053 00:34:39.053 --- 10.0.0.2 ping statistics --- 00:34:39.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.054 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:34:39.054 00:34:39.054 --- 10.0.0.1 ping statistics --- 00:34:39.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.054 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=375031 
00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 375031 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 375031 ']' 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.054 05:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.054 [2024-12-10 05:59:56.853670] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:39.054 [2024-12-10 05:59:56.854602] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:34:39.054 [2024-12-10 05:59:56.854637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.054 [2024-12-10 05:59:56.940745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:39.054 [2024-12-10 05:59:56.982740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.054 [2024-12-10 05:59:56.982781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.054 [2024-12-10 05:59:56.982787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.054 [2024-12-10 05:59:56.982794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.054 [2024-12-10 05:59:56.982799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:39.054 [2024-12-10 05:59:56.984305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.054 [2024-12-10 05:59:56.984414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:39.054 [2024-12-10 05:59:56.984517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.054 [2024-12-10 05:59:56.984518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:39.313 [2024-12-10 05:59:57.052179] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:39.313 [2024-12-10 05:59:57.052979] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:39.313 [2024-12-10 05:59:57.053266] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:39.313 [2024-12-10 05:59:57.053667] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:39.313 [2024-12-10 05:59:57.053710] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 [2024-12-10 05:59:57.737199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 Malloc0 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 [2024-12-10 05:59:57.817458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.881 05:59:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:39.881 test case1: single bdev can't be used in multiple subsystems 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:39.881 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:40.139 [2024-12-10 05:59:57.844883] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:40.139 [2024-12-10 05:59:57.844903] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:40.139 [2024-12-10 05:59:57.844911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.139 request: 00:34:40.139 { 00:34:40.139 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:40.139 "namespace": { 00:34:40.139 "bdev_name": "Malloc0", 00:34:40.139 "no_auto_visible": false, 00:34:40.139 "hide_metadata": false 00:34:40.139 }, 00:34:40.139 "method": "nvmf_subsystem_add_ns", 00:34:40.139 "req_id": 1 00:34:40.139 } 00:34:40.139 Got JSON-RPC error response 00:34:40.139 response: 00:34:40.139 { 00:34:40.139 "code": -32602, 00:34:40.139 "message": "Invalid parameters" 00:34:40.139 } 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:40.139 Adding namespace failed - expected result. 
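The test-case1 pattern above (issue an RPC that is expected to fail, record the status, and treat a non-zero status as the passing outcome) can be sketched in isolation. This is an illustrative reconstruction only: `rpc_cmd_stub` is a hypothetical stand-in for the harness's `rpc_cmd` helper, which is not available outside the SPDK test environment; the simulated error values are taken from the log above.

```shell
#!/usr/bin/env bash
# Hedged sketch of the "expected failure" check from target/nmic.sh test case1.
# rpc_cmd_stub is hypothetical: it simulates the target rejecting a second
# claim on an already-claimed bdev (bdev_open error=-1, JSON-RPC code -32602),
# as seen in the log output above.
rpc_cmd_stub() {
    echo '{"code": -32602, "message": "Invalid parameters"}' >&2
    return 1
}

nmic_status=0
rpc_cmd_stub nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    2>/dev/null || nmic_status=1

# The test passes only when the RPC failed (a single bdev must not be
# added to two subsystems).
if [ "$nmic_status" -eq 0 ]; then
    echo 'Adding namespace passed - failure expected.'
else
    echo ' Adding namespace failed - expected result.'
fi
```

The same inverted-status idiom (`|| nmic_status=1` followed by `'[' $nmic_status -eq 0 ']'`) is visible verbatim in the xtrace lines above.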
00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:40.139 test case2: host connect to nvmf target in multiple paths 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:40.139 [2024-12-10 05:59:57.856980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.139 05:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:40.139 05:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:40.396 05:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:40.396 05:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:40.397 05:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:40.397 05:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:40.397 05:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:42.918 06:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:42.918 [global] 00:34:42.918 thread=1 00:34:42.918 invalidate=1 00:34:42.918 rw=write 00:34:42.918 time_based=1 00:34:42.918 runtime=1 00:34:42.918 ioengine=libaio 00:34:42.918 direct=1 00:34:42.918 bs=4096 00:34:42.918 iodepth=1 00:34:42.918 norandommap=0 00:34:42.918 numjobs=1 00:34:42.918 00:34:42.918 verify_dump=1 00:34:42.918 verify_backlog=512 00:34:42.918 verify_state_save=0 00:34:42.918 do_verify=1 00:34:42.918 verify=crc32c-intel 00:34:42.918 [job0] 00:34:42.918 filename=/dev/nvme0n1 00:34:42.918 Could not set queue depth (nvme0n1) 00:34:42.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.918 fio-3.35 00:34:42.918 Starting 1 thread 00:34:44.289 00:34:44.289 job0: (groupid=0, jobs=1): err= 0: pid=375902: Tue Dec 10 
06:00:01 2024 00:34:44.289 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:34:44.289 slat (nsec): min=10099, max=29125, avg=21147.00, stdev=5266.82 00:34:44.289 clat (usec): min=40805, max=41390, avg=40988.01, stdev=121.43 00:34:44.289 lat (usec): min=40830, max=41400, avg=41009.16, stdev=118.76 00:34:44.289 clat percentiles (usec): 00:34:44.289 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:44.289 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:44.289 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:44.289 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:44.289 | 99.99th=[41157] 00:34:44.289 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:44.289 slat (usec): min=10, max=27451, avg=65.54, stdev=1212.67 00:34:44.289 clat (usec): min=133, max=294, avg=234.23, stdev=24.77 00:34:44.289 lat (usec): min=144, max=27667, avg=299.77, stdev=1212.10 00:34:44.289 clat percentiles (usec): 00:34:44.289 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 235], 20.00th=[ 237], 00:34:44.289 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 241], 00:34:44.289 | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 245], 00:34:44.289 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 293], 00:34:44.289 | 99.99th=[ 293] 00:34:44.289 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.289 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.289 lat (usec) : 250=94.00%, 500=2.06% 00:34:44.289 lat (msec) : 50=3.94% 00:34:44.289 cpu : usr=0.39%, sys=0.98%, ctx=536, majf=0, minf=1 00:34:44.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.289 issued rwts: 
total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.289 00:34:44.289 Run status group 0 (all jobs): 00:34:44.289 READ: bw=82.6KiB/s (84.6kB/s), 82.6KiB/s-82.6KiB/s (84.6kB/s-84.6kB/s), io=84.0KiB (86.0kB), run=1017-1017msec 00:34:44.289 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:34:44.289 00:34:44.289 Disk stats (read/write): 00:34:44.289 nvme0n1: ios=44/512, merge=0/0, ticks=1724/117, in_queue=1841, util=98.60% 00:34:44.289 06:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:44.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:44.289 06:00:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.289 rmmod nvme_tcp 00:34:44.289 rmmod nvme_fabrics 00:34:44.289 rmmod nvme_keyring 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 375031 ']' 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 375031 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 375031 ']' 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 375031 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375031 00:34:44.289 
06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375031' 00:34:44.289 killing process with pid 375031 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 375031 00:34:44.289 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 375031 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.548 06:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.079 00:34:47.079 real 0m14.595s 00:34:47.079 user 0m25.037s 00:34:47.079 sys 0m6.724s 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:47.079 ************************************ 00:34:47.079 END TEST nvmf_nmic 00:34:47.079 ************************************ 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.079 ************************************ 00:34:47.079 START TEST nvmf_fio_target 00:34:47.079 ************************************ 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:47.079 * Looking for test storage... 
00:34:47.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.079 
06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:47.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.079 --rc genhtml_branch_coverage=1 00:34:47.079 --rc genhtml_function_coverage=1 00:34:47.079 --rc genhtml_legend=1 00:34:47.079 --rc geninfo_all_blocks=1 00:34:47.079 --rc geninfo_unexecuted_blocks=1 00:34:47.079 00:34:47.079 ' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:47.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.079 --rc genhtml_branch_coverage=1 00:34:47.079 --rc genhtml_function_coverage=1 00:34:47.079 --rc genhtml_legend=1 00:34:47.079 --rc geninfo_all_blocks=1 00:34:47.079 --rc geninfo_unexecuted_blocks=1 00:34:47.079 00:34:47.079 ' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:47.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.079 --rc genhtml_branch_coverage=1 00:34:47.079 --rc genhtml_function_coverage=1 00:34:47.079 --rc genhtml_legend=1 00:34:47.079 --rc geninfo_all_blocks=1 00:34:47.079 --rc geninfo_unexecuted_blocks=1 00:34:47.079 00:34:47.079 ' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:47.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.079 --rc genhtml_branch_coverage=1 00:34:47.079 --rc genhtml_function_coverage=1 00:34:47.079 --rc genhtml_legend=1 00:34:47.079 --rc geninfo_all_blocks=1 
00:34:47.079 --rc geninfo_unexecuted_blocks=1 00:34:47.079 00:34:47.079 ' 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.079 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:47.080 
06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.080 06:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.080 
06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:47.080 06:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.080 06:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.647 06:00:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:53.647 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:53.647 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.647 
06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:53.647 Found net 
devices under 0000:af:00.0: cvl_0_0 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.647 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:53.647 Found net devices under 0000:af:00.1: cvl_0_1 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:53.648 06:00:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:34:53.648 00:34:53.648 --- 10.0.0.2 ping statistics --- 00:34:53.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.648 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:34:53.648 00:34:53.648 --- 10.0.0.1 ping statistics --- 00:34:53.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.648 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.648 06:00:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=380477 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 380477 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 380477 ']' 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.648 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.907 [2024-12-10 06:00:11.633963] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:53.907 [2024-12-10 06:00:11.634837] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:34:53.907 [2024-12-10 06:00:11.634866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.907 [2024-12-10 06:00:11.718446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:53.907 [2024-12-10 06:00:11.757520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.907 [2024-12-10 06:00:11.757556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.907 [2024-12-10 06:00:11.757562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.907 [2024-12-10 06:00:11.757568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.907 [2024-12-10 06:00:11.757573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.907 [2024-12-10 06:00:11.759108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.907 [2024-12-10 06:00:11.759236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.907 [2024-12-10 06:00:11.759329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.907 [2024-12-10 06:00:11.759330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.907 [2024-12-10 06:00:11.826794] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:53.907 [2024-12-10 06:00:11.827387] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:53.907 [2024-12-10 06:00:11.827787] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:53.907 [2024-12-10 06:00:11.828185] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:53.907 [2024-12-10 06:00:11.828234] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.165 06:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:54.165 [2024-12-10 06:00:12.076114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.165 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:54.423 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:54.423 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:54.681 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:54.681 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:54.939 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:54.939 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.197 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:55.197 06:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:55.456 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.456 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:55.456 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.715 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:55.715 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.972 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:55.972 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:56.230 06:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:56.230 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:56.230 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.487 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:56.487 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:56.744 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.001 [2024-12-10 06:00:14.760052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.001 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:57.260 06:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:57.260 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:57.518 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:57.518 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:57.518 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:57.518 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:57.518 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:57.518 06:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:59.421 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:59.421 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:59.421 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:59.678 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:59.678 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:59.678 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:59.678 06:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:59.678 [global] 00:34:59.678 thread=1 00:34:59.678 invalidate=1 00:34:59.678 rw=write 00:34:59.678 time_based=1 00:34:59.678 runtime=1 00:34:59.678 ioengine=libaio 00:34:59.678 direct=1 00:34:59.678 bs=4096 00:34:59.678 iodepth=1 00:34:59.678 norandommap=0 00:34:59.678 numjobs=1 00:34:59.678 00:34:59.678 verify_dump=1 00:34:59.678 verify_backlog=512 00:34:59.678 verify_state_save=0 00:34:59.678 do_verify=1 00:34:59.678 verify=crc32c-intel 00:34:59.678 [job0] 00:34:59.678 filename=/dev/nvme0n1 00:34:59.678 [job1] 00:34:59.678 filename=/dev/nvme0n2 00:34:59.678 [job2] 00:34:59.678 filename=/dev/nvme0n3 00:34:59.678 [job3] 00:34:59.678 filename=/dev/nvme0n4 00:34:59.678 Could not set queue depth (nvme0n1) 00:34:59.678 Could not set queue depth (nvme0n2) 00:34:59.678 Could not set queue depth (nvme0n3) 00:34:59.678 Could not set queue depth (nvme0n4) 00:34:59.935 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.935 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.935 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.935 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.935 fio-3.35 00:34:59.935 Starting 4 threads 00:35:01.305 00:35:01.305 job0: (groupid=0, jobs=1): err= 0: pid=381704: Tue Dec 10 06:00:19 2024 00:35:01.305 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:35:01.305 slat (nsec): min=12029, max=23151, avg=20994.05, stdev=2786.33 00:35:01.305 clat (usec): min=40892, max=41043, avg=40966.48, stdev=37.36 00:35:01.305 lat (usec): min=40914, 
max=41055, avg=40987.47, stdev=36.50 00:35:01.305 clat percentiles (usec): 00:35:01.305 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:01.305 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:01.305 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:01.305 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:01.305 | 99.99th=[41157] 00:35:01.305 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:35:01.305 slat (nsec): min=12673, max=45021, avg=14022.30, stdev=2143.57 00:35:01.305 clat (usec): min=149, max=323, avg=188.11, stdev=14.50 00:35:01.306 lat (usec): min=162, max=368, avg=202.13, stdev=15.09 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:35:01.306 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 190], 00:35:01.306 | 70.00th=[ 194], 80.00th=[ 196], 90.00th=[ 200], 95.00th=[ 204], 00:35:01.306 | 99.00th=[ 225], 99.50th=[ 253], 99.90th=[ 322], 99.95th=[ 322], 00:35:01.306 | 99.99th=[ 322] 00:35:01.306 bw ( KiB/s): min= 4096, max= 4096, per=37.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.306 lat (usec) : 250=95.32%, 500=0.56% 00:35:01.306 lat (msec) : 50=4.12% 00:35:01.306 cpu : usr=0.50%, sys=0.99%, ctx=536, majf=0, minf=1 00:35:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.306 job1: (groupid=0, jobs=1): err= 0: pid=381705: Tue Dec 10 06:00:19 2024 00:35:01.306 read: IOPS=41, BW=165KiB/s (169kB/s)(168KiB/1017msec) 
00:35:01.306 slat (nsec): min=7330, max=23211, avg=15472.93, stdev=7046.55 00:35:01.306 clat (usec): min=208, max=42023, avg=21538.63, stdev=20533.46 00:35:01.306 lat (usec): min=217, max=42046, avg=21554.10, stdev=20533.01 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 241], 20.00th=[ 251], 00:35:01.306 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[38011], 60.00th=[40633], 00:35:01.306 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:01.306 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:01.306 | 99.99th=[42206] 00:35:01.306 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:35:01.306 slat (nsec): min=9491, max=42774, avg=10750.56, stdev=2011.48 00:35:01.306 clat (usec): min=127, max=443, avg=203.54, stdev=25.04 00:35:01.306 lat (usec): min=137, max=486, avg=214.29, stdev=25.60 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[ 147], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 190], 00:35:01.306 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:35:01.306 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 241], 95.00th=[ 245], 00:35:01.306 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 445], 99.95th=[ 445], 00:35:01.306 | 99.99th=[ 445] 00:35:01.306 bw ( KiB/s): min= 4096, max= 4096, per=37.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.306 lat (usec) : 250=91.52%, 500=4.51% 00:35:01.306 lat (msec) : 50=3.97% 00:35:01.306 cpu : usr=0.20%, sys=0.59%, ctx=554, majf=0, minf=3 00:35:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 issued rwts: total=42,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.306 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:35:01.306 job2: (groupid=0, jobs=1): err= 0: pid=381706: Tue Dec 10 06:00:19 2024 00:35:01.306 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:35:01.306 slat (nsec): min=6627, max=28614, avg=7782.88, stdev=2112.00 00:35:01.306 clat (usec): min=182, max=41092, avg=753.56, stdev=4737.59 00:35:01.306 lat (usec): min=190, max=41102, avg=761.35, stdev=4739.29 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:35:01.306 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 196], 00:35:01.306 | 70.00th=[ 198], 80.00th=[ 200], 90.00th=[ 204], 95.00th=[ 212], 00:35:01.306 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:01.306 | 99.99th=[41157] 00:35:01.306 write: IOPS=1245, BW=4983KiB/s (5103kB/s)(4988KiB/1001msec); 0 zone resets 00:35:01.306 slat (nsec): min=8995, max=41816, avg=10635.95, stdev=1612.09 00:35:01.306 clat (usec): min=129, max=484, avg=161.70, stdev=28.99 00:35:01.306 lat (usec): min=139, max=526, avg=172.34, stdev=29.20 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:35:01.306 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 169], 00:35:01.306 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 204], 00:35:01.306 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 355], 99.95th=[ 486], 00:35:01.306 | 99.99th=[ 486] 00:35:01.306 bw ( KiB/s): min= 4096, max= 4096, per=37.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.306 lat (usec) : 250=99.21%, 500=0.18% 00:35:01.306 lat (msec) : 50=0.62% 00:35:01.306 cpu : usr=0.90%, sys=2.40%, ctx=2271, majf=0, minf=2 00:35:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 issued rwts: total=1024,1247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.306 job3: (groupid=0, jobs=1): err= 0: pid=381707: Tue Dec 10 06:00:19 2024 00:35:01.306 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:35:01.306 slat (nsec): min=10200, max=24436, avg=21566.00, stdev=2612.71 00:35:01.306 clat (usec): min=40859, max=41425, avg=40983.34, stdev=111.17 00:35:01.306 lat (usec): min=40881, max=41435, avg=41004.90, stdev=108.88 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:01.306 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:01.306 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:01.306 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:01.306 | 99.99th=[41681] 00:35:01.306 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:35:01.306 slat (nsec): min=10499, max=39069, avg=12745.99, stdev=1921.44 00:35:01.306 clat (usec): min=132, max=335, avg=200.93, stdev=22.82 00:35:01.306 lat (usec): min=143, max=375, avg=213.68, stdev=23.29 00:35:01.306 clat percentiles (usec): 00:35:01.306 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 180], 20.00th=[ 188], 00:35:01.306 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:35:01.306 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 245], 00:35:01.306 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 338], 99.95th=[ 338], 00:35:01.306 | 99.99th=[ 338] 00:35:01.306 bw ( KiB/s): min= 4096, max= 4096, per=37.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.306 lat (usec) : 250=92.51%, 500=3.37% 00:35:01.306 lat (msec) : 50=4.12% 00:35:01.306 cpu : usr=0.40%, sys=0.99%, ctx=534, majf=0, minf=1 00:35:01.306 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.306 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.306 00:35:01.306 Run status group 0 (all jobs): 00:35:01.306 READ: bw=4366KiB/s (4471kB/s), 86.9KiB/s-4092KiB/s (89.0kB/s-4190kB/s), io=4440KiB (4547kB), run=1001-1017msec 00:35:01.306 WRITE: bw=10.7MiB/s (11.2MB/s), 2014KiB/s-4983KiB/s (2062kB/s-5103kB/s), io=10.9MiB (11.4MB), run=1001-1017msec 00:35:01.306 00:35:01.306 Disk stats (read/write): 00:35:01.306 nvme0n1: ios=42/512, merge=0/0, ticks=1642/94, in_queue=1736, util=100.00% 00:35:01.306 nvme0n2: ios=31/512, merge=0/0, ticks=663/102, in_queue=765, util=81.51% 00:35:01.306 nvme0n3: ios=512/546, merge=0/0, ticks=672/99, in_queue=771, util=86.89% 00:35:01.306 nvme0n4: ios=16/512, merge=0/0, ticks=657/97, in_queue=754, util=88.95% 00:35:01.306 06:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:01.306 [global] 00:35:01.306 thread=1 00:35:01.306 invalidate=1 00:35:01.306 rw=randwrite 00:35:01.306 time_based=1 00:35:01.306 runtime=1 00:35:01.306 ioengine=libaio 00:35:01.306 direct=1 00:35:01.306 bs=4096 00:35:01.306 iodepth=1 00:35:01.306 norandommap=0 00:35:01.306 numjobs=1 00:35:01.306 00:35:01.306 verify_dump=1 00:35:01.306 verify_backlog=512 00:35:01.306 verify_state_save=0 00:35:01.306 do_verify=1 00:35:01.306 verify=crc32c-intel 00:35:01.306 [job0] 00:35:01.306 filename=/dev/nvme0n1 00:35:01.306 [job1] 00:35:01.306 filename=/dev/nvme0n2 00:35:01.306 [job2] 00:35:01.306 filename=/dev/nvme0n3 00:35:01.306 [job3] 00:35:01.306 filename=/dev/nvme0n4 00:35:01.306 Could 
not set queue depth (nvme0n1) 00:35:01.306 Could not set queue depth (nvme0n2) 00:35:01.306 Could not set queue depth (nvme0n3) 00:35:01.306 Could not set queue depth (nvme0n4) 00:35:01.564 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.564 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.564 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.564 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.564 fio-3.35 00:35:01.564 Starting 4 threads 00:35:02.935 00:35:02.935 job0: (groupid=0, jobs=1): err= 0: pid=382073: Tue Dec 10 06:00:20 2024 00:35:02.935 read: IOPS=1033, BW=4134KiB/s (4233kB/s)(4192KiB/1014msec) 00:35:02.935 slat (nsec): min=6826, max=27332, avg=8068.08, stdev=2087.13 00:35:02.935 clat (usec): min=184, max=41375, avg=702.21, stdev=4340.46 00:35:02.935 lat (usec): min=192, max=41385, avg=710.28, stdev=4341.84 00:35:02.935 clat percentiles (usec): 00:35:02.935 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:35:02.935 | 30.00th=[ 212], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 243], 00:35:02.935 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:35:02.935 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:02.935 | 99.99th=[41157] 00:35:02.935 write: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec); 0 zone resets 00:35:02.935 slat (nsec): min=9546, max=39410, avg=11136.09, stdev=2070.42 00:35:02.935 clat (usec): min=127, max=426, avg=159.10, stdev=25.68 00:35:02.935 lat (usec): min=137, max=439, avg=170.23, stdev=26.19 00:35:02.935 clat percentiles (usec): 00:35:02.935 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:35:02.935 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:35:02.935 | 
70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 215], 00:35:02.935 | 99.00th=[ 247], 99.50th=[ 277], 99.90th=[ 412], 99.95th=[ 429], 00:35:02.935 | 99.99th=[ 429] 00:35:02.935 bw ( KiB/s): min= 512, max=11776, per=44.40%, avg=6144.00, stdev=7964.85, samples=2 00:35:02.935 iops : min= 128, max= 2944, avg=1536.00, stdev=1991.21, samples=2 00:35:02.935 lat (usec) : 250=92.76%, 500=6.70%, 750=0.04% 00:35:02.935 lat (msec) : 10=0.04%, 50=0.46% 00:35:02.935 cpu : usr=1.97%, sys=4.15%, ctx=2584, majf=0, minf=1 00:35:02.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 issued rwts: total=1048,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.936 job1: (groupid=0, jobs=1): err= 0: pid=382074: Tue Dec 10 06:00:20 2024 00:35:02.936 read: IOPS=903, BW=3613KiB/s (3700kB/s)(3624KiB/1003msec) 00:35:02.936 slat (nsec): min=6432, max=24221, avg=7867.80, stdev=2227.13 00:35:02.936 clat (usec): min=175, max=41587, avg=887.74, stdev=5028.99 00:35:02.936 lat (usec): min=183, max=41595, avg=895.61, stdev=5029.60 00:35:02.936 clat percentiles (usec): 00:35:02.936 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 196], 00:35:02.936 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 243], 00:35:02.936 | 70.00th=[ 253], 80.00th=[ 314], 90.00th=[ 400], 95.00th=[ 408], 00:35:02.936 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:35:02.936 | 99.99th=[41681] 00:35:02.936 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:35:02.936 slat (nsec): min=9294, max=40515, avg=11448.67, stdev=2873.65 00:35:02.936 clat (usec): min=123, max=365, avg=169.73, stdev=22.28 00:35:02.936 lat (usec): min=146, max=401, avg=181.18, stdev=23.28 00:35:02.936 
clat percentiles (usec): 00:35:02.936 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:35:02.936 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 172], 00:35:02.936 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 210], 00:35:02.936 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 243], 99.95th=[ 367], 00:35:02.936 | 99.99th=[ 367] 00:35:02.936 bw ( KiB/s): min= 8192, max= 8192, per=59.20%, avg=8192.00, stdev= 0.00, samples=1 00:35:02.936 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:02.936 lat (usec) : 250=83.89%, 500=15.13%, 750=0.16% 00:35:02.936 lat (msec) : 2=0.05%, 4=0.05%, 50=0.73% 00:35:02.936 cpu : usr=1.20%, sys=1.90%, ctx=1930, majf=0, minf=1 00:35:02.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 issued rwts: total=906,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.936 job2: (groupid=0, jobs=1): err= 0: pid=382075: Tue Dec 10 06:00:20 2024 00:35:02.936 read: IOPS=22, BW=91.1KiB/s (93.3kB/s)(92.0KiB/1010msec) 00:35:02.936 slat (nsec): min=8979, max=27601, avg=24409.91, stdev=4621.43 00:35:02.936 clat (usec): min=423, max=41337, avg=39219.36, stdev=8457.59 00:35:02.936 lat (usec): min=434, max=41346, avg=39243.77, stdev=8460.53 00:35:02.936 clat percentiles (usec): 00:35:02.936 | 1.00th=[ 424], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:02.936 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:02.936 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:02.936 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:02.936 | 99.99th=[41157] 00:35:02.936 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 
00:35:02.936 slat (nsec): min=10750, max=55745, avg=12124.78, stdev=2353.50 00:35:02.936 clat (usec): min=165, max=342, avg=193.47, stdev=17.35 00:35:02.936 lat (usec): min=176, max=396, avg=205.59, stdev=18.34 00:35:02.936 clat percentiles (usec): 00:35:02.936 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:35:02.936 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:35:02.936 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:35:02.936 | 99.00th=[ 258], 99.50th=[ 338], 99.90th=[ 343], 99.95th=[ 343], 00:35:02.936 | 99.99th=[ 343] 00:35:02.936 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.936 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.936 lat (usec) : 250=94.58%, 500=1.31% 00:35:02.936 lat (msec) : 50=4.11% 00:35:02.936 cpu : usr=0.20%, sys=1.19%, ctx=536, majf=0, minf=1 00:35:02.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.936 job3: (groupid=0, jobs=1): err= 0: pid=382076: Tue Dec 10 06:00:20 2024 00:35:02.936 read: IOPS=262, BW=1050KiB/s (1075kB/s)(1088KiB/1036msec) 00:35:02.936 slat (nsec): min=8175, max=25819, avg=10391.09, stdev=3899.42 00:35:02.936 clat (usec): min=192, max=41487, avg=3400.78, stdev=10891.79 00:35:02.936 lat (usec): min=201, max=41498, avg=3411.17, stdev=10894.47 00:35:02.936 clat percentiles (usec): 00:35:02.936 | 1.00th=[ 196], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 241], 00:35:02.936 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:35:02.936 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 433], 95.00th=[41157], 00:35:02.936 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:35:02.936 | 99.99th=[41681] 00:35:02.936 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:35:02.936 slat (nsec): min=9219, max=40263, avg=10587.86, stdev=2066.64 00:35:02.936 clat (usec): min=143, max=324, avg=195.34, stdev=24.73 00:35:02.936 lat (usec): min=159, max=364, avg=205.93, stdev=24.86 00:35:02.936 clat percentiles (usec): 00:35:02.936 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:35:02.936 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:35:02.936 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 241], 95.00th=[ 243], 00:35:02.936 | 99.00th=[ 260], 99.50th=[ 293], 99.90th=[ 326], 99.95th=[ 326], 00:35:02.936 | 99.99th=[ 326] 00:35:02.936 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.936 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.936 lat (usec) : 250=85.59%, 500=11.73% 00:35:02.936 lat (msec) : 50=2.68% 00:35:02.936 cpu : usr=0.29%, sys=1.06%, ctx=784, majf=0, minf=1 00:35:02.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.936 issued rwts: total=272,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.936 00:35:02.936 Run status group 0 (all jobs): 00:35:02.936 READ: bw=8683KiB/s (8892kB/s), 91.1KiB/s-4134KiB/s (93.3kB/s-4233kB/s), io=8996KiB (9212kB), run=1003-1036msec 00:35:02.936 WRITE: bw=13.5MiB/s (14.2MB/s), 1977KiB/s-6059KiB/s (2024kB/s-6205kB/s), io=14.0MiB (14.7MB), run=1003-1036msec 00:35:02.936 00:35:02.936 Disk stats (read/write): 00:35:02.936 nvme0n1: ios=1093/1536, merge=0/0, ticks=524/233, in_queue=757, util=85.97% 00:35:02.936 nvme0n2: ios=941/1024, merge=0/0, 
ticks=775/172, in_queue=947, util=99.08% 00:35:02.936 nvme0n3: ios=77/512, merge=0/0, ticks=1227/90, in_queue=1317, util=97.48% 00:35:02.936 nvme0n4: ios=267/512, merge=0/0, ticks=720/96, in_queue=816, util=89.63% 00:35:02.936 06:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:02.936 [global] 00:35:02.936 thread=1 00:35:02.936 invalidate=1 00:35:02.936 rw=write 00:35:02.936 time_based=1 00:35:02.936 runtime=1 00:35:02.936 ioengine=libaio 00:35:02.936 direct=1 00:35:02.936 bs=4096 00:35:02.936 iodepth=128 00:35:02.936 norandommap=0 00:35:02.936 numjobs=1 00:35:02.936 00:35:02.936 verify_dump=1 00:35:02.936 verify_backlog=512 00:35:02.936 verify_state_save=0 00:35:02.936 do_verify=1 00:35:02.936 verify=crc32c-intel 00:35:02.936 [job0] 00:35:02.936 filename=/dev/nvme0n1 00:35:02.936 [job1] 00:35:02.936 filename=/dev/nvme0n2 00:35:02.936 [job2] 00:35:02.936 filename=/dev/nvme0n3 00:35:02.936 [job3] 00:35:02.936 filename=/dev/nvme0n4 00:35:02.936 Could not set queue depth (nvme0n1) 00:35:02.936 Could not set queue depth (nvme0n2) 00:35:02.936 Could not set queue depth (nvme0n3) 00:35:02.936 Could not set queue depth (nvme0n4) 00:35:03.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.193 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.193 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.193 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.193 fio-3.35 00:35:03.193 Starting 4 threads 00:35:04.564 00:35:04.564 job0: (groupid=0, jobs=1): err= 0: pid=382446: Tue Dec 10 06:00:22 2024 00:35:04.564 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 
00:35:04.564 slat (nsec): min=1260, max=14688k, avg=107624.68, stdev=766768.72 00:35:04.564 clat (usec): min=5199, max=52657, avg=13879.24, stdev=8550.94 00:35:04.564 lat (usec): min=5202, max=52667, avg=13986.87, stdev=8625.25 00:35:04.564 clat percentiles (usec): 00:35:04.564 | 1.00th=[ 5735], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 8029], 00:35:04.564 | 30.00th=[ 8586], 40.00th=[10028], 50.00th=[12649], 60.00th=[13042], 00:35:04.564 | 70.00th=[13960], 80.00th=[15139], 90.00th=[27657], 95.00th=[33817], 00:35:04.564 | 99.00th=[48497], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:35:04.564 | 99.99th=[52691] 00:35:04.564 write: IOPS=3682, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec); 0 zone resets 00:35:04.564 slat (usec): min=2, max=31578, avg=160.63, stdev=1068.96 00:35:04.564 clat (usec): min=1794, max=117691, avg=20906.27, stdev=21891.53 00:35:04.564 lat (msec): min=5, max=117, avg=21.07, stdev=22.04 00:35:04.564 clat percentiles (msec): 00:35:04.564 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:35:04.564 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 16], 00:35:04.564 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 46], 95.00th=[ 63], 00:35:04.564 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 118], 99.95th=[ 118], 00:35:04.564 | 99.99th=[ 118] 00:35:04.564 bw ( KiB/s): min= 8240, max=20480, per=20.27%, avg=14360.00, stdev=8654.99, samples=2 00:35:04.564 iops : min= 2060, max= 5120, avg=3590.00, stdev=2163.75, samples=2 00:35:04.564 lat (msec) : 2=0.01%, 10=38.16%, 20=39.00%, 50=18.22%, 100=3.20% 00:35:04.564 lat (msec) : 250=1.41% 00:35:04.564 cpu : usr=3.39%, sys=3.98%, ctx=384, majf=0, minf=2 00:35:04.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:35:04.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.564 issued rwts: total=3584,3701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.564 
latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.564 job1: (groupid=0, jobs=1): err= 0: pid=382447: Tue Dec 10 06:00:22 2024 00:35:04.564 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:35:04.564 slat (nsec): min=1285, max=17387k, avg=92047.62, stdev=773145.51 00:35:04.564 clat (usec): min=881, max=45902, avg=11599.33, stdev=5098.18 00:35:04.564 lat (usec): min=985, max=45905, avg=11691.38, stdev=5162.19 00:35:04.564 clat percentiles (usec): 00:35:04.564 | 1.00th=[ 5800], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7963], 00:35:04.564 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10028], 00:35:04.564 | 70.00th=[12911], 80.00th=[14615], 90.00th=[18744], 95.00th=[22676], 00:35:04.564 | 99.00th=[28967], 99.50th=[31065], 99.90th=[39060], 99.95th=[45876], 00:35:04.564 | 99.99th=[45876] 00:35:04.564 write: IOPS=5878, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1008msec); 0 zone resets 00:35:04.564 slat (usec): min=2, max=12199, avg=66.18, stdev=486.24 00:35:04.564 clat (usec): min=876, max=63264, avg=10491.45, stdev=6976.75 00:35:04.564 lat (usec): min=1135, max=63268, avg=10557.64, stdev=7005.23 00:35:04.564 clat percentiles (usec): 00:35:04.564 | 1.00th=[ 2606], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6325], 00:35:04.564 | 30.00th=[ 7242], 40.00th=[ 8356], 50.00th=[ 9372], 60.00th=[ 9634], 00:35:04.564 | 70.00th=[ 9896], 80.00th=[12649], 90.00th=[15401], 95.00th=[21627], 00:35:04.564 | 99.00th=[44303], 99.50th=[45876], 99.90th=[56361], 99.95th=[56361], 00:35:04.564 | 99.99th=[63177] 00:35:04.564 bw ( KiB/s): min=20472, max=25920, per=32.74%, avg=23196.00, stdev=3852.32, samples=2 00:35:04.564 iops : min= 5118, max= 6480, avg=5799.00, stdev=963.08, samples=2 00:35:04.564 lat (usec) : 1000=0.02% 00:35:04.564 lat (msec) : 2=0.16%, 4=1.58%, 10=64.79%, 20=26.92%, 50=6.34% 00:35:04.564 lat (msec) : 100=0.19% 00:35:04.564 cpu : usr=5.26%, sys=5.06%, ctx=466, majf=0, minf=2 00:35:04.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.3%, >=64=99.5% 00:35:04.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.564 issued rwts: total=5632,5926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.564 job2: (groupid=0, jobs=1): err= 0: pid=382448: Tue Dec 10 06:00:22 2024 00:35:04.564 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:35:04.564 slat (nsec): min=1470, max=40061k, avg=150820.70, stdev=1297777.82 00:35:04.564 clat (usec): min=4485, max=76674, avg=19017.64, stdev=12125.45 00:35:04.564 lat (usec): min=4496, max=76679, avg=19168.46, stdev=12184.68 00:35:04.564 clat percentiles (usec): 00:35:04.564 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[11994], 20.00th=[12649], 00:35:04.564 | 30.00th=[13042], 40.00th=[13435], 50.00th=[14615], 60.00th=[16319], 00:35:04.564 | 70.00th=[17957], 80.00th=[22152], 90.00th=[30540], 95.00th=[42730], 00:35:04.564 | 99.00th=[71828], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:35:04.564 | 99.99th=[77071] 00:35:04.564 write: IOPS=2938, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1012msec); 0 zone resets 00:35:04.564 slat (usec): min=2, max=40624, avg=200.57, stdev=1572.59 00:35:04.564 clat (msec): min=3, max=122, avg=24.60, stdev=17.50 00:35:04.564 lat (msec): min=3, max=122, avg=24.80, stdev=17.63 00:35:04.564 clat percentiles (msec): 00:35:04.564 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 11], 00:35:04.564 | 30.00th=[ 12], 40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 22], 00:35:04.564 | 70.00th=[ 31], 80.00th=[ 41], 90.00th=[ 42], 95.00th=[ 51], 00:35:04.564 | 99.00th=[ 123], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:35:04.564 | 99.99th=[ 123] 00:35:04.564 bw ( KiB/s): min= 8936, max=13832, per=16.07%, avg=11384.00, stdev=3461.99, samples=2 00:35:04.565 iops : min= 2234, max= 3458, avg=2846.00, stdev=865.50, samples=2 00:35:04.565 lat (msec) : 
4=0.22%, 10=7.66%, 20=53.96%, 50=33.07%, 100=4.54% 00:35:04.565 lat (msec) : 250=0.56% 00:35:04.565 cpu : usr=2.57%, sys=3.66%, ctx=261, majf=0, minf=1 00:35:04.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:35:04.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.565 issued rwts: total=2560,2974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.565 job3: (groupid=0, jobs=1): err= 0: pid=382449: Tue Dec 10 06:00:22 2024 00:35:04.565 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:35:04.565 slat (nsec): min=1558, max=19014k, avg=83178.79, stdev=736169.62 00:35:04.565 clat (usec): min=1184, max=54889, avg=11569.69, stdev=5604.76 00:35:04.565 lat (usec): min=1208, max=54913, avg=11652.87, stdev=5660.65 00:35:04.565 clat percentiles (usec): 00:35:04.565 | 1.00th=[ 1467], 5.00th=[ 5342], 10.00th=[ 7242], 20.00th=[ 8717], 00:35:04.565 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:35:04.565 | 70.00th=[11469], 80.00th=[12780], 90.00th=[17433], 95.00th=[20579], 00:35:04.565 | 99.00th=[38536], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:35:04.565 | 99.99th=[54789] 00:35:04.565 write: IOPS=5315, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1002msec); 0 zone resets 00:35:04.565 slat (usec): min=2, max=40675, avg=87.63, stdev=822.56 00:35:04.565 clat (usec): min=222, max=70199, avg=12676.19, stdev=11128.09 00:35:04.565 lat (usec): min=236, max=70203, avg=12763.83, stdev=11171.97 00:35:04.565 clat percentiles (usec): 00:35:04.565 | 1.00th=[ 1680], 5.00th=[ 3949], 10.00th=[ 5866], 20.00th=[ 7898], 00:35:04.565 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[11076], 00:35:04.565 | 70.00th=[11338], 80.00th=[12256], 90.00th=[16188], 95.00th=[43254], 00:35:04.565 | 99.00th=[65274], 99.50th=[67634], 99.90th=[69731], 
99.95th=[69731], 00:35:04.565 | 99.99th=[69731] 00:35:04.565 bw ( KiB/s): min=20480, max=21112, per=29.35%, avg=20796.00, stdev=446.89, samples=2 00:35:04.565 iops : min= 5120, max= 5278, avg=5199.00, stdev=111.72, samples=2 00:35:04.565 lat (usec) : 250=0.01%, 500=0.02%, 750=0.09%, 1000=0.14% 00:35:04.565 lat (msec) : 2=1.32%, 4=2.31%, 10=35.18%, 20=54.05%, 50=4.60% 00:35:04.565 lat (msec) : 100=2.28% 00:35:04.565 cpu : usr=4.90%, sys=5.89%, ctx=380, majf=0, minf=1 00:35:04.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:04.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.565 issued rwts: total=5120,5326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.565 00:35:04.565 Run status group 0 (all jobs): 00:35:04.565 READ: bw=65.2MiB/s (68.4MB/s), 9.88MiB/s-21.8MiB/s (10.4MB/s-22.9MB/s), io=66.0MiB (69.2MB), run=1002-1012msec 00:35:04.565 WRITE: bw=69.2MiB/s (72.6MB/s), 11.5MiB/s-23.0MiB/s (12.0MB/s-24.1MB/s), io=70.0MiB (73.4MB), run=1002-1012msec 00:35:04.565 00:35:04.565 Disk stats (read/write): 00:35:04.565 nvme0n1: ios=2924/3072, merge=0/0, ticks=19451/29328, in_queue=48779, util=81.06% 00:35:04.565 nvme0n2: ios=4096/4607, merge=0/0, ticks=47680/48327, in_queue=96007, util=81.87% 00:35:04.565 nvme0n3: ios=2084/2183, merge=0/0, ticks=34814/36285, in_queue=71099, util=99.23% 00:35:04.565 nvme0n4: ios=3873/4096, merge=0/0, ticks=40197/44188, in_queue=84385, util=97.54% 00:35:04.565 06:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:04.565 [global] 00:35:04.565 thread=1 00:35:04.565 invalidate=1 00:35:04.565 rw=randwrite 00:35:04.565 time_based=1 00:35:04.565 runtime=1 00:35:04.565 
ioengine=libaio 00:35:04.565 direct=1 00:35:04.565 bs=4096 00:35:04.565 iodepth=128 00:35:04.565 norandommap=0 00:35:04.565 numjobs=1 00:35:04.565 00:35:04.565 verify_dump=1 00:35:04.565 verify_backlog=512 00:35:04.565 verify_state_save=0 00:35:04.565 do_verify=1 00:35:04.565 verify=crc32c-intel 00:35:04.565 [job0] 00:35:04.565 filename=/dev/nvme0n1 00:35:04.565 [job1] 00:35:04.565 filename=/dev/nvme0n2 00:35:04.565 [job2] 00:35:04.565 filename=/dev/nvme0n3 00:35:04.565 [job3] 00:35:04.565 filename=/dev/nvme0n4 00:35:04.565 Could not set queue depth (nvme0n1) 00:35:04.565 Could not set queue depth (nvme0n2) 00:35:04.565 Could not set queue depth (nvme0n3) 00:35:04.565 Could not set queue depth (nvme0n4) 00:35:04.822 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.822 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.822 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.822 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.822 fio-3.35 00:35:04.822 Starting 4 threads 00:35:06.210 00:35:06.210 job0: (groupid=0, jobs=1): err= 0: pid=382820: Tue Dec 10 06:00:23 2024 00:35:06.210 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:35:06.210 slat (nsec): min=1009, max=12538k, avg=96917.29, stdev=711824.29 00:35:06.210 clat (usec): min=3243, max=48469, avg=12462.30, stdev=5869.02 00:35:06.210 lat (usec): min=3248, max=48473, avg=12559.22, stdev=5926.54 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 3523], 5.00th=[ 6063], 10.00th=[ 7635], 20.00th=[ 8717], 00:35:06.210 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11207], 60.00th=[12518], 00:35:06.210 | 70.00th=[13698], 80.00th=[14615], 90.00th=[18220], 95.00th=[21890], 00:35:06.210 | 99.00th=[42206], 99.50th=[45876], 
99.90th=[48497], 99.95th=[48497], 00:35:06.210 | 99.99th=[48497] 00:35:06.210 write: IOPS=4371, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1008msec); 0 zone resets 00:35:06.210 slat (nsec): min=1869, max=12424k, avg=126890.47, stdev=719464.34 00:35:06.210 clat (usec): min=1044, max=81637, avg=17378.90, stdev=13448.94 00:35:06.210 lat (usec): min=1066, max=81643, avg=17505.79, stdev=13522.03 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 3884], 5.00th=[ 6325], 10.00th=[ 7701], 20.00th=[ 9110], 00:35:06.210 | 30.00th=[ 9765], 40.00th=[11469], 50.00th=[13042], 60.00th=[14484], 00:35:06.210 | 70.00th=[18744], 80.00th=[20317], 90.00th=[33162], 95.00th=[47973], 00:35:06.210 | 99.00th=[73925], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:35:06.210 | 99.99th=[81265] 00:35:06.210 bw ( KiB/s): min=15376, max=18856, per=23.92%, avg=17116.00, stdev=2460.73, samples=2 00:35:06.210 iops : min= 3844, max= 4714, avg=4279.00, stdev=615.18, samples=2 00:35:06.210 lat (msec) : 2=0.07%, 4=2.14%, 10=32.83%, 20=51.07%, 50=11.49% 00:35:06.210 lat (msec) : 100=2.40% 00:35:06.210 cpu : usr=2.48%, sys=3.57%, ctx=463, majf=0, minf=1 00:35:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.210 issued rwts: total=4096,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.210 job1: (groupid=0, jobs=1): err= 0: pid=382821: Tue Dec 10 06:00:23 2024 00:35:06.210 read: IOPS=3051, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:35:06.210 slat (nsec): min=1538, max=18973k, avg=131054.59, stdev=895676.45 00:35:06.210 clat (usec): min=1530, max=42117, avg=15232.12, stdev=7224.79 00:35:06.210 lat (usec): min=6456, max=42125, avg=15363.18, stdev=7276.47 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 6783], 
5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10552], 00:35:06.210 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[12780], 00:35:06.210 | 70.00th=[14746], 80.00th=[19006], 90.00th=[27395], 95.00th=[32637], 00:35:06.210 | 99.00th=[38536], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:35:06.210 | 99.99th=[42206] 00:35:06.210 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:35:06.210 slat (usec): min=2, max=21364, avg=189.45, stdev=1038.72 00:35:06.210 clat (usec): min=5157, max=94636, avg=26159.58, stdev=17260.81 00:35:06.210 lat (usec): min=5163, max=94648, avg=26349.02, stdev=17366.39 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[11338], 00:35:06.210 | 30.00th=[14877], 40.00th=[18744], 50.00th=[19530], 60.00th=[25822], 00:35:06.210 | 70.00th=[31589], 80.00th=[36439], 90.00th=[53740], 95.00th=[61080], 00:35:06.210 | 99.00th=[87557], 99.50th=[91751], 99.90th=[94897], 99.95th=[94897], 00:35:06.210 | 99.99th=[94897] 00:35:06.210 bw ( KiB/s): min=11080, max=13496, per=17.18%, avg=12288.00, stdev=1708.37, samples=2 00:35:06.210 iops : min= 2770, max= 3374, avg=3072.00, stdev=427.09, samples=2 00:35:06.210 lat (msec) : 2=0.02%, 10=12.78%, 20=55.28%, 50=26.52%, 100=5.41% 00:35:06.210 cpu : usr=2.29%, sys=3.59%, ctx=369, majf=0, minf=1 00:35:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:35:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.210 issued rwts: total=3064,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.210 job2: (groupid=0, jobs=1): err= 0: pid=382822: Tue Dec 10 06:00:23 2024 00:35:06.210 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:35:06.210 slat (nsec): min=1254, max=11034k, avg=89112.49, 
stdev=585705.72 00:35:06.210 clat (usec): min=4927, max=28492, avg=11665.12, stdev=2820.40 00:35:06.210 lat (usec): min=4931, max=28499, avg=11754.23, stdev=2844.88 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 5538], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[ 9896], 00:35:06.210 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:35:06.210 | 70.00th=[12256], 80.00th=[13566], 90.00th=[15533], 95.00th=[17171], 00:35:06.210 | 99.00th=[21103], 99.50th=[21103], 99.90th=[24773], 99.95th=[24773], 00:35:06.210 | 99.99th=[28443] 00:35:06.210 write: IOPS=5673, BW=22.2MiB/s (23.2MB/s)(22.3MiB/1005msec); 0 zone resets 00:35:06.210 slat (nsec): min=1979, max=9892.5k, avg=80196.43, stdev=481109.78 00:35:06.210 clat (usec): min=526, max=22008, avg=10760.42, stdev=2165.37 00:35:06.210 lat (usec): min=4528, max=22024, avg=10840.62, stdev=2177.79 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[ 8979], 00:35:06.210 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11076], 60.00th=[11338], 00:35:06.210 | 70.00th=[11600], 80.00th=[11994], 90.00th=[13566], 95.00th=[15270], 00:35:06.210 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17171], 99.95th=[17957], 00:35:06.210 | 99.99th=[21890] 00:35:06.210 bw ( KiB/s): min=22064, max=22992, per=31.49%, avg=22528.00, stdev=656.20, samples=2 00:35:06.210 iops : min= 5516, max= 5748, avg=5632.00, stdev=164.05, samples=2 00:35:06.210 lat (usec) : 750=0.01% 00:35:06.210 lat (msec) : 10=29.18%, 20=69.83%, 50=0.99% 00:35:06.210 cpu : usr=3.69%, sys=8.07%, ctx=474, majf=0, minf=1 00:35:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.210 issued rwts: total=5632,5702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.210 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:35:06.210 job3: (groupid=0, jobs=1): err= 0: pid=382823: Tue Dec 10 06:00:23 2024 00:35:06.210 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:35:06.210 slat (nsec): min=1158, max=44884k, avg=109117.43, stdev=922025.82 00:35:06.210 clat (usec): min=5933, max=72032, avg=14015.85, stdev=8798.80 00:35:06.210 lat (usec): min=5944, max=72038, avg=14124.97, stdev=8842.21 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10290], 00:35:06.210 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11863], 60.00th=[12387], 00:35:06.210 | 70.00th=[13435], 80.00th=[14353], 90.00th=[17695], 95.00th=[28705], 00:35:06.210 | 99.00th=[55837], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:35:06.210 | 99.99th=[71828] 00:35:06.210 write: IOPS=4829, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1004msec); 0 zone resets 00:35:06.210 slat (nsec): min=1816, max=8374.3k, avg=90412.91, stdev=513532.81 00:35:06.210 clat (usec): min=299, max=52040, avg=12889.78, stdev=7669.34 00:35:06.210 lat (usec): min=309, max=52061, avg=12980.20, stdev=7722.64 00:35:06.210 clat percentiles (usec): 00:35:06.210 | 1.00th=[ 2180], 5.00th=[ 6128], 10.00th=[ 7439], 20.00th=[ 9110], 00:35:06.210 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:35:06.210 | 70.00th=[12256], 80.00th=[13304], 90.00th=[16909], 95.00th=[34866], 00:35:06.210 | 99.00th=[44827], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:35:06.210 | 99.99th=[52167] 00:35:06.210 bw ( KiB/s): min=17296, max=20480, per=26.40%, avg=18888.00, stdev=2251.43, samples=2 00:35:06.210 iops : min= 4324, max= 5120, avg=4722.00, stdev=562.86, samples=2 00:35:06.210 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.30% 00:35:06.210 lat (msec) : 2=0.02%, 4=0.58%, 10=17.14%, 20=74.88%, 50=5.59% 00:35:06.210 lat (msec) : 100=1.42% 00:35:06.210 cpu : usr=2.89%, sys=6.18%, ctx=413, majf=0, minf=1 00:35:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.3%, >=64=99.3% 00:35:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.211 issued rwts: total=4608,4849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.211 00:35:06.211 Run status group 0 (all jobs): 00:35:06.211 READ: bw=67.4MiB/s (70.7MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1004-1008msec 00:35:06.211 WRITE: bw=69.9MiB/s (73.3MB/s), 12.0MiB/s-22.2MiB/s (12.5MB/s-23.2MB/s), io=70.4MiB (73.8MB), run=1004-1008msec 00:35:06.211 00:35:06.211 Disk stats (read/write): 00:35:06.211 nvme0n1: ios=3247/3584, merge=0/0, ticks=40285/62395, in_queue=102680, util=86.07% 00:35:06.211 nvme0n2: ios=2421/2560, merge=0/0, ticks=19841/31907, in_queue=51748, util=90.34% 00:35:06.211 nvme0n3: ios=4729/5120, merge=0/0, ticks=27350/28116, in_queue=55466, util=96.03% 00:35:06.211 nvme0n4: ios=3726/4096, merge=0/0, ticks=32279/36394, in_queue=68673, util=99.89% 00:35:06.211 06:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:06.211 06:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=383049 00:35:06.211 06:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:06.211 06:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:06.211 [global] 00:35:06.211 thread=1 00:35:06.211 invalidate=1 00:35:06.211 rw=read 00:35:06.211 time_based=1 00:35:06.211 runtime=10 00:35:06.211 ioengine=libaio 00:35:06.211 direct=1 00:35:06.211 bs=4096 00:35:06.211 iodepth=1 00:35:06.211 norandommap=1 00:35:06.211 numjobs=1 00:35:06.211 00:35:06.211 [job0] 00:35:06.211 filename=/dev/nvme0n1 
00:35:06.211 [job1] 00:35:06.211 filename=/dev/nvme0n2 00:35:06.211 [job2] 00:35:06.211 filename=/dev/nvme0n3 00:35:06.211 [job3] 00:35:06.211 filename=/dev/nvme0n4 00:35:06.211 Could not set queue depth (nvme0n1) 00:35:06.211 Could not set queue depth (nvme0n2) 00:35:06.211 Could not set queue depth (nvme0n3) 00:35:06.211 Could not set queue depth (nvme0n4) 00:35:06.468 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.468 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.468 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.468 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.468 fio-3.35 00:35:06.468 Starting 4 threads 00:35:08.990 06:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:09.247 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:09.247 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:35:09.247 fio: pid=383190, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.505 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=21725184, buflen=4096 00:35:09.505 fio: pid=383189, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.505 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:09.505 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:09.505 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:09.505 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:09.763 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3174400, buflen=4096 00:35:09.763 fio: pid=383187, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.763 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=335872, buflen=4096 00:35:09.763 fio: pid=383188, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.763 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:09.763 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:10.023 00:35:10.023 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=383187: Tue Dec 10 06:00:27 2024 00:35:10.023 read: IOPS=247, BW=988KiB/s (1012kB/s)(3100KiB/3138msec) 00:35:10.023 slat (usec): min=6, max=16847, avg=40.19, stdev=650.13 00:35:10.023 clat (usec): min=192, max=42282, avg=3977.76, stdev=11760.19 00:35:10.023 lat (usec): min=199, max=58030, avg=4017.97, stdev=11874.70 00:35:10.023 clat percentiles (usec): 00:35:10.023 | 1.00th=[ 200], 5.00th=[ 223], 10.00th=[ 235], 20.00th=[ 241], 00:35:10.023 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:35:10.023 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[41157], 00:35:10.023 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:10.023 | 99.99th=[42206] 00:35:10.023 bw ( KiB/s): min= 208, max= 4976, per=13.75%, avg=1025.17, stdev=1935.77, samples=6 00:35:10.023 iops : min= 52, max= 1244, avg=256.17, stdev=484.01, samples=6 00:35:10.023 lat (usec) : 250=63.92%, 500=26.80% 00:35:10.023 lat (msec) : 50=9.15% 00:35:10.023 cpu : usr=0.06%, sys=0.48%, ctx=779, majf=0, minf=1 00:35:10.023 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.023 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.023 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.023 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.023 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=383188: Tue Dec 10 06:00:27 2024 00:35:10.024 read: IOPS=24, BW=98.2KiB/s (101kB/s)(328KiB/3341msec) 00:35:10.024 slat (usec): min=10, max=14790, avg=422.71, stdev=2204.76 00:35:10.024 clat (usec): min=305, max=42030, avg=40041.75, stdev=6316.80 00:35:10.024 lat (usec): min=328, max=55794, avg=40469.30, stdev=6759.19 00:35:10.024 clat percentiles (usec): 00:35:10.024 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:10.024 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:10.024 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:10.024 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:10.024 | 99.99th=[42206] 00:35:10.024 bw ( KiB/s): min= 94, max= 112, per=1.33%, avg=99.67, stdev= 6.98, samples=6 00:35:10.024 iops : min= 23, max= 28, avg=24.83, stdev= 1.83, samples=6 00:35:10.024 lat (usec) : 500=2.41% 00:35:10.024 lat (msec) : 50=96.39% 00:35:10.024 cpu : usr=0.12%, sys=0.00%, ctx=86, majf=0, minf=2 00:35:10.024 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.024 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.024 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.024 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=383189: Tue Dec 10 06:00:27 2024 00:35:10.024 read: IOPS=1831, BW=7323KiB/s (7499kB/s)(20.7MiB/2897msec) 00:35:10.024 slat (nsec): min=7557, max=52454, avg=9788.41, stdev=2133.40 00:35:10.024 clat (usec): min=177, max=41160, avg=529.86, stdev=3571.06 00:35:10.024 lat (usec): min=196, max=41185, avg=539.65, stdev=3572.23 00:35:10.024 clat percentiles (usec): 00:35:10.024 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 194], 20.00th=[ 196], 00:35:10.024 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:35:10.024 | 70.00th=[ 210], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 262], 00:35:10.024 | 99.00th=[ 445], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:10.024 | 99.99th=[41157] 00:35:10.024 bw ( KiB/s): min= 96, max=18776, per=76.36%, avg=5694.40, stdev=8324.30, samples=5 00:35:10.024 iops : min= 24, max= 4694, avg=1423.60, stdev=2081.07, samples=5 00:35:10.024 lat (usec) : 250=90.91%, 500=8.28% 00:35:10.024 lat (msec) : 10=0.02%, 50=0.77% 00:35:10.024 cpu : usr=1.24%, sys=3.14%, ctx=5307, majf=0, minf=2 00:35:10.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.024 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.024 issued rwts: total=5305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.024 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u 
error, error=Operation not supported): pid=383190: Tue Dec 10 06:00:27 2024 00:35:10.024 read: IOPS=24, BW=98.4KiB/s (101kB/s)(268KiB/2723msec) 00:35:10.024 slat (nsec): min=14206, max=67232, avg=23237.81, stdev=5605.19 00:35:10.024 clat (usec): min=467, max=41980, avg=40389.33, stdev=4953.25 00:35:10.024 lat (usec): min=535, max=42005, avg=40412.55, stdev=4947.79 00:35:10.024 clat percentiles (usec): 00:35:10.024 | 1.00th=[ 469], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:10.024 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:10.024 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:10.024 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:10.024 | 99.99th=[42206] 00:35:10.024 bw ( KiB/s): min= 96, max= 104, per=1.33%, avg=99.20, stdev= 4.38, samples=5 00:35:10.024 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:35:10.024 lat (usec) : 500=1.47% 00:35:10.024 lat (msec) : 50=97.06% 00:35:10.024 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:35:10.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.024 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.024 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.024 00:35:10.024 Run status group 0 (all jobs): 00:35:10.024 READ: bw=7456KiB/s (7635kB/s), 98.2KiB/s-7323KiB/s (101kB/s-7499kB/s), io=24.3MiB (25.5MB), run=2723-3341msec 00:35:10.024 00:35:10.024 Disk stats (read/write): 00:35:10.024 nvme0n1: ios=809/0, merge=0/0, ticks=3990/0, in_queue=3990, util=98.43% 00:35:10.024 nvme0n2: ios=77/0, merge=0/0, ticks=3080/0, in_queue=3080, util=95.36% 00:35:10.024 nvme0n3: ios=5210/0, merge=0/0, ticks=3312/0, in_queue=3312, util=99.63% 00:35:10.024 nvme0n4: ios=64/0, merge=0/0, 
ticks=2584/0, in_queue=2584, util=96.44% 00:35:10.024 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.024 06:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:10.321 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.321 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:10.599 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.599 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:10.599 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.599 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 383049 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:10.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:10.867 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:11.124 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:11.124 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:11.124 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:11.124 nvmf hotplug test: fio failed as expected 00:35:11.124 06:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:11.124 06:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:11.124 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.125 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.125 rmmod nvme_tcp 00:35:11.125 rmmod nvme_fabrics 00:35:11.383 rmmod nvme_keyring 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 380477 ']' 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 380477 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 380477 ']' 00:35:11.383 06:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 380477 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380477 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380477' 00:35:11.383 killing process with pid 380477 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 380477 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 380477 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:11.383 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.383 
06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.642 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.642 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.642 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.642 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.642 06:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.547 00:35:13.547 real 0m26.847s 00:35:13.547 user 1m32.978s 00:35:13.547 sys 0m11.126s 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.547 ************************************ 00:35:13.547 END TEST nvmf_fio_target 00:35:13.547 ************************************ 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:35:13.547 ************************************ 00:35:13.547 START TEST nvmf_bdevio 00:35:13.547 ************************************ 00:35:13.547 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:13.807 * Looking for test storage... 00:35:13.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.807 --rc genhtml_branch_coverage=1 00:35:13.807 --rc genhtml_function_coverage=1 00:35:13.807 --rc genhtml_legend=1 00:35:13.807 --rc geninfo_all_blocks=1 00:35:13.807 --rc geninfo_unexecuted_blocks=1 00:35:13.807 00:35:13.807 ' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.807 --rc genhtml_branch_coverage=1 00:35:13.807 --rc genhtml_function_coverage=1 00:35:13.807 --rc genhtml_legend=1 00:35:13.807 --rc geninfo_all_blocks=1 00:35:13.807 --rc geninfo_unexecuted_blocks=1 00:35:13.807 00:35:13.807 ' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.807 --rc genhtml_branch_coverage=1 00:35:13.807 --rc genhtml_function_coverage=1 00:35:13.807 --rc genhtml_legend=1 00:35:13.807 --rc geninfo_all_blocks=1 00:35:13.807 --rc geninfo_unexecuted_blocks=1 00:35:13.807 00:35:13.807 ' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.807 --rc genhtml_branch_coverage=1 00:35:13.807 --rc genhtml_function_coverage=1 00:35:13.807 --rc genhtml_legend=1 00:35:13.807 --rc geninfo_all_blocks=1 00:35:13.807 --rc geninfo_unexecuted_blocks=1 00:35:13.807 00:35:13.807 ' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.807 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.808 06:00:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.808 06:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.378 06:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.378 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:20.379 06:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:20.379 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:20.379 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:20.379 Found net devices under 0000:af:00.0: cvl_0_0 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:20.379 Found net devices under 0000:af:00.1: cvl_0_1 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.379 
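The `gather_supported_nvmf_pci_devs` phase above matches installed NICs against known vendor:device IDs (here Intel `0x8086:0x159b`, an E810 part, found at `0000:af:00.0` and `0000:af:00.1`). A minimal sketch of that lookup pattern, assuming a pre-populated `pci_bus_cache` map; the variable names mirror `nvmf/common.sh`, but the cache contents below are hypothetical stand-ins for what the real setup scripts would populate:

```shell
#!/usr/bin/env bash
# pci_bus_cache maps "vendor:device" -> space-separated PCI addresses.
# These entries are illustrative, modeled on the two E810 ports in the log.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
# Append whatever the cache holds for each supported ID (empty if absent).
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # no ConnectX-5 cached here
pci_devs=("${e810[@]}")
echo "found ${#pci_devs[@]} supported device(s): ${pci_devs[*]}"
```

The real script then walks `/sys/bus/pci/devices/$pci/net/` for each match to collect the kernel net-device names (`cvl_0_0`, `cvl_0_1` in this run).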
06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:20.379 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:20.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:20.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:35:20.639 00:35:20.639 --- 10.0.0.2 ping statistics --- 00:35:20.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.639 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:20.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:20.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:35:20.639 00:35:20.639 --- 10.0.0.1 ping statistics --- 00:35:20.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.639 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
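The `nvmf_tcp_init` sequence above moves one port of the NIC pair into a private network namespace so the target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and the initiator (10.0.0.1, in the root namespace) can exercise real hardware on a single host, then verifies both directions with `ping`. The actual commands need root and the physical interfaces, so here is a dry-run sketch that just emits the plan; interface names and addressing follow `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init: print the namespace/addressing commands
# the log above executes, without running them (they require root + NICs).
nvmf_tcp_init_plan() {
  local tgt_if=$1 ini_if=$2
  local ns="${tgt_if}_ns_spdk"
  echo "ip netns add $ns"
  echo "ip link set $tgt_if netns $ns"
  echo "ip addr add 10.0.0.1/24 dev $ini_if"
  echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
  echo "ip link set $ini_if up"
  echo "ip netns exec $ns ip link set $tgt_if up"
  echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
}
plan=$(nvmf_tcp_init_plan cvl_0_0 cvl_0_1)
echo "$plan"
```

The iptables rule mirrors the `ipts` helper in the log, which additionally tags the rule with an `SPDK_NVMF:` comment so cleanup can find and delete it later.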
nvmf/common.sh@509 -- # nvmfpid=387900 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 387900 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 387900 ']' 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.639 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.639 [2024-12-10 06:00:38.494221] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:20.639 [2024-12-10 06:00:38.495152] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:35:20.639 [2024-12-10 06:00:38.495186] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:20.639 [2024-12-10 06:00:38.577820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:20.898 [2024-12-10 06:00:38.619804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.898 [2024-12-10 06:00:38.619837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.898 [2024-12-10 06:00:38.619844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.898 [2024-12-10 06:00:38.619850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.898 [2024-12-10 06:00:38.619855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:20.898 [2024-12-10 06:00:38.621430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:20.898 [2024-12-10 06:00:38.621538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:20.898 [2024-12-10 06:00:38.621645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:20.898 [2024-12-10 06:00:38.621647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:20.898 [2024-12-10 06:00:38.689414] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:20.898 [2024-12-10 06:00:38.690290] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:20.898 [2024-12-10 06:00:38.690470] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:20.898 [2024-12-10 06:00:38.690869] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:20.898 [2024-12-10 06:00:38.690913] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.899 [2024-12-10 06:00:38.758333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.899 Malloc0 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:20.899 [2024-12-10 06:00:38.838486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
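Lines `target/bdevio.sh@18-22` above build the target side through five `rpc_cmd` calls: create the TCP transport, create a 64 MiB / 512-byte-block malloc bdev, create subsystem `cnode1`, attach the bdev as a namespace, and open a listener on 10.0.0.2:4420. Collected as plain `rpc.py` invocations (the RPC names and arguments come from the log; the `scripts/rpc.py` path is the usual SPDK location but is an assumption here):

```shell
#!/usr/bin/env bash
# The RPC sequence behind target/bdevio.sh@18-22, in order.
rpc=scripts/rpc.py   # illustrative path; bdevio.sh uses the rpc_cmd wrapper
rpc_calls=(
  "$rpc nvmf_create_transport -t tcp -o -u 8192"
  "$rpc bdev_malloc_create 64 512 -b Malloc0"
  "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${rpc_calls[@]}"
```

Ordering matters: the transport must exist before the listener, and the bdev before `nvmf_subsystem_add_ns`, which is why the script runs them strictly in this sequence.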
00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.899 { 00:35:20.899 "params": { 00:35:20.899 "name": "Nvme$subsystem", 00:35:20.899 "trtype": "$TEST_TRANSPORT", 00:35:20.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.899 "adrfam": "ipv4", 00:35:20.899 "trsvcid": "$NVMF_PORT", 00:35:20.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.899 "hdgst": ${hdgst:-false}, 00:35:20.899 "ddgst": ${ddgst:-false} 00:35:20.899 }, 00:35:20.899 "method": "bdev_nvme_attach_controller" 00:35:20.899 } 00:35:20.899 EOF 00:35:20.899 )") 00:35:20.899 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:21.157 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:21.157 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:21.157 06:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:21.157 "params": { 00:35:21.157 "name": "Nvme1", 00:35:21.157 "trtype": "tcp", 00:35:21.157 "traddr": "10.0.0.2", 00:35:21.157 "adrfam": "ipv4", 00:35:21.157 "trsvcid": "4420", 00:35:21.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:21.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:21.157 "hdgst": false, 00:35:21.157 "ddgst": false 00:35:21.157 }, 00:35:21.157 "method": "bdev_nvme_attach_controller" 00:35:21.157 }' 00:35:21.157 [2024-12-10 06:00:38.889058] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:35:21.157 [2024-12-10 06:00:38.889105] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387923 ] 00:35:21.157 [2024-12-10 06:00:38.972600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:21.157 [2024-12-10 06:00:39.015252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.157 [2024-12-10 06:00:39.015306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.157 [2024-12-10 06:00:39.015306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:21.415 I/O targets: 00:35:21.415 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:21.415 00:35:21.415 00:35:21.415 CUnit - A unit testing framework for C - Version 2.1-3 00:35:21.415 http://cunit.sourceforge.net/ 00:35:21.415 00:35:21.415 00:35:21.415 Suite: bdevio tests on: Nvme1n1 00:35:21.415 Test: blockdev write read block ...passed 00:35:21.672 Test: blockdev write zeroes read block ...passed 00:35:21.673 Test: blockdev write zeroes read no split ...passed 00:35:21.673 Test: blockdev 
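The `gen_nvmf_target_json` output printed above is what the `bdevio` app consumes via `--json /dev/fd/62`: a `bdev_nvme_attach_controller` entry per subsystem, with digest options defaulting to `false`. A condensed sketch of how that heredoc expansion produces the JSON shown in the log, assuming the environment values from this run:

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json's per-subsystem expansion (nvmf/common.sh).
# The values below mirror the printf output in the log; hdgst/ddgst fall
# back to false when unset, exactly as the ${hdgst:-false} defaults do.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The real helper then joins all subsystem entries with `IFS=,` and pipes them through `jq .` for validation before handing the stream to the test binary.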
write zeroes read split ...passed 00:35:21.673 Test: blockdev write zeroes read split partial ...passed 00:35:21.673 Test: blockdev reset ...[2024-12-10 06:00:39.402140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:21.673 [2024-12-10 06:00:39.402200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189d0 (9): Bad file descriptor 00:35:21.673 [2024-12-10 06:00:39.446089] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:21.673 passed 00:35:21.673 Test: blockdev write read 8 blocks ...passed 00:35:21.673 Test: blockdev write read size > 128k ...passed 00:35:21.673 Test: blockdev write read invalid size ...passed 00:35:21.673 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:21.673 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:21.673 Test: blockdev write read max offset ...passed 00:35:21.673 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:21.930 Test: blockdev writev readv 8 blocks ...passed 00:35:21.930 Test: blockdev writev readv 30 x 1block ...passed 00:35:21.930 Test: blockdev writev readv block ...passed 00:35:21.930 Test: blockdev writev readv size > 128k ...passed 00:35:21.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:21.930 Test: blockdev comparev and writev ...[2024-12-10 06:00:39.698430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 [2024-12-10 06:00:39.698455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.930 [2024-12-10 06:00:39.698469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 
[2024-12-10 06:00:39.698477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.930 [2024-12-10 06:00:39.698772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 [2024-12-10 06:00:39.698782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:21.930 [2024-12-10 06:00:39.698793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 [2024-12-10 06:00:39.698800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:21.930 [2024-12-10 06:00:39.699086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 [2024-12-10 06:00:39.699095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:21.930 [2024-12-10 06:00:39.699107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 [2024-12-10 06:00:39.699114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:21.930 [2024-12-10 06:00:39.699408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.930 [2024-12-10 06:00:39.699419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:21.931 [2024-12-10 06:00:39.699433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:21.931 [2024-12-10 06:00:39.699441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:21.931 passed 00:35:21.931 Test: blockdev nvme passthru rw ...passed 00:35:21.931 Test: blockdev nvme passthru vendor specific ...[2024-12-10 06:00:39.781567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.931 [2024-12-10 06:00:39.781581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:21.931 [2024-12-10 06:00:39.781688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.931 [2024-12-10 06:00:39.781697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:21.931 [2024-12-10 06:00:39.781805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.931 [2024-12-10 06:00:39.781814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:21.931 [2024-12-10 06:00:39.781917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.931 [2024-12-10 06:00:39.781926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:21.931 passed 00:35:21.931 Test: blockdev nvme admin passthru ...passed 00:35:21.931 Test: blockdev copy ...passed 00:35:21.931 00:35:21.931 Run Summary: Type Total Ran Passed Failed Inactive 00:35:21.931 suites 1 1 n/a 0 0 00:35:21.931 tests 23 23 23 0 0 00:35:21.931 asserts 152 152 152 0 n/a 00:35:21.931 00:35:21.931 Elapsed time = 1.089 
seconds 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:22.189 06:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:22.189 rmmod nvme_tcp 00:35:22.189 rmmod nvme_fabrics 00:35:22.189 rmmod nvme_keyring 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 387900 ']' 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 387900 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 387900 ']' 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 387900 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387900 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387900' 00:35:22.189 killing process with pid 387900 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 387900 00:35:22.189 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 387900 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.448 06:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.983 06:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:24.983 00:35:24.983 real 0m10.888s 00:35:24.983 user 0m9.435s 00:35:24.983 sys 0m5.821s 00:35:24.983 06:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:24.983 06:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:24.983 ************************************ 00:35:24.983 END TEST nvmf_bdevio 00:35:24.983 ************************************ 00:35:24.983 06:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:24.983 00:35:24.983 real 4m46.769s 00:35:24.983 user 9m13.688s 00:35:24.983 sys 1m58.167s 00:35:24.983 06:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:35:24.983 06:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:24.983 ************************************ 00:35:24.983 END TEST nvmf_target_core_interrupt_mode 00:35:24.983 ************************************ 00:35:24.983 06:00:42 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:24.983 06:00:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:24.983 06:00:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:24.983 06:00:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:24.983 ************************************ 00:35:24.983 START TEST nvmf_interrupt 00:35:24.983 ************************************ 00:35:24.983 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:24.983 * Looking for test storage... 
00:35:24.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:24.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.984 --rc genhtml_branch_coverage=1 00:35:24.984 --rc genhtml_function_coverage=1 00:35:24.984 --rc genhtml_legend=1 00:35:24.984 --rc geninfo_all_blocks=1 00:35:24.984 --rc geninfo_unexecuted_blocks=1 00:35:24.984 00:35:24.984 ' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:24.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.984 --rc genhtml_branch_coverage=1 00:35:24.984 --rc 
genhtml_function_coverage=1 00:35:24.984 --rc genhtml_legend=1 00:35:24.984 --rc geninfo_all_blocks=1 00:35:24.984 --rc geninfo_unexecuted_blocks=1 00:35:24.984 00:35:24.984 ' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:24.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.984 --rc genhtml_branch_coverage=1 00:35:24.984 --rc genhtml_function_coverage=1 00:35:24.984 --rc genhtml_legend=1 00:35:24.984 --rc geninfo_all_blocks=1 00:35:24.984 --rc geninfo_unexecuted_blocks=1 00:35:24.984 00:35:24.984 ' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:24.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.984 --rc genhtml_branch_coverage=1 00:35:24.984 --rc genhtml_function_coverage=1 00:35:24.984 --rc genhtml_legend=1 00:35:24.984 --rc geninfo_all_blocks=1 00:35:24.984 --rc geninfo_unexecuted_blocks=1 00:35:24.984 00:35:24.984 ' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.984 
06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.984 
06:00:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.984 06:00:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:24.984 
06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.984 06:00:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.549 06:00:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:31.549 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:31.549 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.549 06:00:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:31.549 Found net devices under 0000:af:00.0: cvl_0_0 00:35:31.549 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:31.550 Found net devices under 0000:af:00.1: cvl_0_1 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.550 06:00:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:31.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:35:31.550 00:35:31.550 --- 10.0.0.2 ping statistics --- 00:35:31.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.550 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:31.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:35:31.550 00:35:31.550 --- 10.0.0.1 ping statistics --- 00:35:31.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.550 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:31.550 06:00:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=392064 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 392064 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 392064 ']' 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.550 06:00:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:31.550 [2024-12-10 06:00:49.440655] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:31.550 [2024-12-10 06:00:49.441581] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:35:31.550 [2024-12-10 06:00:49.441612] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.809 [2024-12-10 06:00:49.525541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:31.809 [2024-12-10 06:00:49.564756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.809 [2024-12-10 06:00:49.564795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.809 [2024-12-10 06:00:49.564802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.809 [2024-12-10 06:00:49.564808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.809 [2024-12-10 06:00:49.564814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.809 [2024-12-10 06:00:49.566024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.809 [2024-12-10 06:00:49.566026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.809 [2024-12-10 06:00:49.634355] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:31.809 [2024-12-10 06:00:49.634872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:31.809 [2024-12-10 06:00:49.635122] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:32.377 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:32.635 5000+0 records in 00:35:32.635 5000+0 records out 00:35:32.635 10240000 bytes (10 MB, 9.8 MiB) copied, 0.016896 s, 606 MB/s 00:35:32.635 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:32.635 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.635 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.635 AIO0 00:35:32.635 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.635 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:32.635 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.636 06:00:50 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.636 [2024-12-10 06:00:50.378839] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.636 [2024-12-10 06:00:50.419126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 392064 0 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 392064 0 idle 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:32.636 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392064 root 20 0 128.2g 47616 34560 R 0.0 0.0 0:00.26 reactor_0' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392064 root 20 0 128.2g 47616 34560 R 0.0 0.0 0:00.26 reactor_0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 392064 1 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392064 1 idle 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392111 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.00 reactor_1' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392111 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.00 
reactor_1 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=392313 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 392064 0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 392064 0 busy 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:32.895 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392064 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0' 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392064 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 392064 1 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 392064 1 busy 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:33.154 06:00:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392111 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1' 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392111 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:33.412 06:00:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 392313 00:35:43.370 Initializing NVMe Controllers 00:35:43.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.370 Controller IO queue size 256, less than required. 00:35:43.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:43.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:43.370 Initialization complete. Launching workers. 
00:35:43.370 ======================================================== 00:35:43.370 Latency(us) 00:35:43.370 Device Information : IOPS MiB/s Average min max 00:35:43.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16569.39 64.72 15458.63 3414.51 29459.39 00:35:43.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16404.29 64.08 15610.86 7873.78 26475.79 00:35:43.370 ======================================================== 00:35:43.370 Total : 32973.68 128.80 15534.36 3414.51 29459.39 00:35:43.370 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 392064 0 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392064 0 idle 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:43.370 06:01:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392064 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0' 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392064 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 392064 1 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392064 1 idle 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:43.370 06:01:01 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392111 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392111 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:43.370 06:01:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:43.937 06:01:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:35:43.937 06:01:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:43.937 06:01:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:43.937 06:01:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:43.937 06:01:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 392064 0 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392064 0 idle 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:45.841 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392064 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.49 reactor_0' 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392064 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.49 reactor_0 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 392064 1 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392064 1 idle 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392064 00:35:46.100 
06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392064 -w 256 00:35:46.100 06:01:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392111 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1' 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392111 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:46.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.360 rmmod nvme_tcp 00:35:46.360 rmmod nvme_fabrics 00:35:46.360 rmmod nvme_keyring 00:35:46.360 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.618 06:01:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 392064 ']' 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 392064 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 392064 ']' 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 392064 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:46.618 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.619 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392064 00:35:46.619 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.619 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.619 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392064' 00:35:46.619 killing process with pid 392064 00:35:46.619 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 392064 00:35:46.619 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 392064 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:46.877 06:01:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.779 06:01:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.779 00:35:48.779 real 0m24.200s 00:35:48.779 user 0m39.947s 00:35:48.779 sys 0m9.098s 00:35:48.779 06:01:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.779 06:01:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:48.779 ************************************ 00:35:48.779 END TEST nvmf_interrupt 00:35:48.779 ************************************ 00:35:48.779 00:35:48.779 real 28m38.103s 00:35:48.779 user 57m28.571s 00:35:48.779 sys 9m59.635s 00:35:48.779 06:01:06 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.779 06:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:48.779 ************************************ 00:35:48.779 END TEST nvmf_tcp 00:35:48.779 ************************************ 00:35:49.038 06:01:06 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:49.038 06:01:06 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:49.038 06:01:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.038 06:01:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.038 06:01:06 -- common/autotest_common.sh@10 -- # set +x 00:35:49.038 ************************************ 
00:35:49.038 START TEST spdkcli_nvmf_tcp 00:35:49.038 ************************************ 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:49.038 * Looking for test storage... 00:35:49.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.038 --rc genhtml_branch_coverage=1 00:35:49.038 --rc genhtml_function_coverage=1 00:35:49.038 --rc genhtml_legend=1 00:35:49.038 --rc geninfo_all_blocks=1 00:35:49.038 --rc geninfo_unexecuted_blocks=1 00:35:49.038 00:35:49.038 ' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.038 --rc genhtml_branch_coverage=1 00:35:49.038 --rc genhtml_function_coverage=1 00:35:49.038 --rc genhtml_legend=1 00:35:49.038 --rc geninfo_all_blocks=1 
00:35:49.038 --rc geninfo_unexecuted_blocks=1 00:35:49.038 00:35:49.038 ' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.038 --rc genhtml_branch_coverage=1 00:35:49.038 --rc genhtml_function_coverage=1 00:35:49.038 --rc genhtml_legend=1 00:35:49.038 --rc geninfo_all_blocks=1 00:35:49.038 --rc geninfo_unexecuted_blocks=1 00:35:49.038 00:35:49.038 ' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.038 --rc genhtml_branch_coverage=1 00:35:49.038 --rc genhtml_function_coverage=1 00:35:49.038 --rc genhtml_legend=1 00:35:49.038 --rc geninfo_all_blocks=1 00:35:49.038 --rc geninfo_unexecuted_blocks=1 00:35:49.038 00:35:49.038 ' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.038 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.296 06:01:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.296 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:49.296 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:49.296 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:49.296 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=395090 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 395090 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 395090 ']' 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:49.297 06:01:07 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:49.297 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.297 [2024-12-10 06:01:07.054687] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:35:49.297 [2024-12-10 06:01:07.054733] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395090 ] 00:35:49.297 [2024-12-10 06:01:07.130556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:49.297 [2024-12-10 06:01:07.170440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.297 [2024-12-10 06:01:07.170443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:49.554 
06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.554 06:01:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:49.554 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:49.554 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:49.554 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:49.554 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:49.554 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:49.554 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:49.554 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:49.554 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:49.554 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:49.554 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:49.554 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:49.555 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:49.555 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:49.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:49.555 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:49.555 ' 00:35:52.081 [2024-12-10 06:01:10.009864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:53.460 [2024-12-10 06:01:11.350222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:55.985 [2024-12-10 06:01:13.829857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:35:58.506 [2024-12-10 06:01:16.012760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:59.879 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:59.879 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:59.879 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:59.879 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:59.879 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:59.879 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:59.879 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:59.879 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:59.879 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:59.879 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:59.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:59.879 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.879 
06:01:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:59.879 06:01:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.445 06:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:00.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:00.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:00.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:00.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:00.445 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:00.445 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.445 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:00.445 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:00.445 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:00.445 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:00.445 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:00.445 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:00.445 ' 00:36:07.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:07.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:07.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:07.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:07.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:07.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:07.000 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:07.000 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:07.000 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:07.000 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:07.000 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:07.000 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:07.000 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:07.000 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 395090 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 395090 ']' 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 395090 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395090 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395090' 00:36:07.000 killing process with pid 395090 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 395090 00:36:07.000 06:01:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 395090 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 395090 ']' 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 395090 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 395090 ']' 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 395090 00:36:07.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (395090) - No such process 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 395090 is not found' 00:36:07.000 Process with pid 395090 is not found 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:07.000 00:36:07.000 real 0m17.369s 00:36:07.000 user 0m38.294s 00:36:07.000 sys 0m0.818s 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.000 06:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.000 ************************************ 00:36:07.000 END TEST spdkcli_nvmf_tcp 00:36:07.000 ************************************ 00:36:07.000 06:01:24 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:07.000 06:01:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:07.000 06:01:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:07.000 06:01:24 -- common/autotest_common.sh@10 
-- # set +x 00:36:07.000 ************************************ 00:36:07.000 START TEST nvmf_identify_passthru 00:36:07.000 ************************************ 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:07.000 * Looking for test storage... 00:36:07.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:07.000 06:01:24 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.000 06:01:24 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.000 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:07.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.000 --rc genhtml_branch_coverage=1 00:36:07.000 --rc genhtml_function_coverage=1 00:36:07.000 --rc genhtml_legend=1 00:36:07.000 --rc geninfo_all_blocks=1 00:36:07.000 --rc geninfo_unexecuted_blocks=1 00:36:07.000 00:36:07.000 ' 00:36:07.000 
06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:07.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.000 --rc genhtml_branch_coverage=1 00:36:07.001 --rc genhtml_function_coverage=1 00:36:07.001 --rc genhtml_legend=1 00:36:07.001 --rc geninfo_all_blocks=1 00:36:07.001 --rc geninfo_unexecuted_blocks=1 00:36:07.001 00:36:07.001 ' 00:36:07.001 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.001 --rc genhtml_branch_coverage=1 00:36:07.001 --rc genhtml_function_coverage=1 00:36:07.001 --rc genhtml_legend=1 00:36:07.001 --rc geninfo_all_blocks=1 00:36:07.001 --rc geninfo_unexecuted_blocks=1 00:36:07.001 00:36:07.001 ' 00:36:07.001 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.001 --rc genhtml_branch_coverage=1 00:36:07.001 --rc genhtml_function_coverage=1 00:36:07.001 --rc genhtml_legend=1 00:36:07.001 --rc geninfo_all_blocks=1 00:36:07.001 --rc geninfo_unexecuted_blocks=1 00:36:07.001 00:36:07.001 ' 00:36:07.001 06:01:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:07.001 06:01:24 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:07.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.001 06:01:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.001 06:01:24 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:07.001 06:01:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.001 06:01:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.001 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:07.001 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:07.001 06:01:24 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.001 06:01:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:13.567 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.568 
06:01:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:13.568 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:13.568 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:13.568 Found net devices under 0000:af:00.0: cvl_0_0 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.568 06:01:30 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:13.568 Found net devices under 0000:af:00.1: cvl_0_1 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.568 
06:01:30 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.568 06:01:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:13.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:36:13.568 00:36:13.568 --- 10.0.0.2 ping statistics --- 00:36:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.568 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:13.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:36:13.568 00:36:13.568 --- 10.0.0.1 ping statistics --- 00:36:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.568 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:13.568 06:01:31 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:13.568 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:13.568 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:13.568 
06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:13.568 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:13.569 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:13.569 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:13.569 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:13.569 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:13.569 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:36:13.569 06:01:31 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:36:13.569 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:13.569 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:13.569 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:13.569 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:13.569 06:01:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:17.754 06:01:35 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ807001JM1P0FGN 00:36:17.754 06:01:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:17.754 06:01:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:17.754 06:01:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=402566 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:21.940 06:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 402566 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 402566 ']' 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.940 06:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:21.940 [2024-12-10 06:01:39.666848] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:36:21.940 [2024-12-10 06:01:39.666893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.940 [2024-12-10 06:01:39.751754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:21.940 [2024-12-10 06:01:39.793298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:21.941 [2024-12-10 06:01:39.793337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:21.941 [2024-12-10 06:01:39.793344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:21.941 [2024-12-10 06:01:39.793349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:21.941 [2024-12-10 06:01:39.793355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:21.941 [2024-12-10 06:01:39.794869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.941 [2024-12-10 06:01:39.794980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:21.941 [2024-12-10 06:01:39.795085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.941 [2024-12-10 06:01:39.795087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:22.875 06:01:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.875 INFO: Log level set to 20 00:36:22.875 INFO: Requests: 00:36:22.875 { 00:36:22.875 "jsonrpc": "2.0", 00:36:22.875 "method": "nvmf_set_config", 00:36:22.875 "id": 1, 00:36:22.875 "params": { 00:36:22.875 "admin_cmd_passthru": { 00:36:22.875 "identify_ctrlr": true 00:36:22.875 } 00:36:22.875 } 00:36:22.875 } 00:36:22.875 00:36:22.875 INFO: response: 00:36:22.875 { 00:36:22.875 "jsonrpc": "2.0", 00:36:22.875 "id": 1, 00:36:22.875 "result": true 00:36:22.875 } 00:36:22.875 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.875 06:01:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.875 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.875 INFO: Setting log level to 20 00:36:22.875 INFO: Setting log level to 20 00:36:22.875 INFO: Log level set to 20 00:36:22.875 INFO: Log level set to 20 00:36:22.875 
INFO: Requests: 00:36:22.875 { 00:36:22.875 "jsonrpc": "2.0", 00:36:22.875 "method": "framework_start_init", 00:36:22.876 "id": 1 00:36:22.876 } 00:36:22.876 00:36:22.876 INFO: Requests: 00:36:22.876 { 00:36:22.876 "jsonrpc": "2.0", 00:36:22.876 "method": "framework_start_init", 00:36:22.876 "id": 1 00:36:22.876 } 00:36:22.876 00:36:22.876 [2024-12-10 06:01:40.593757] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:22.876 INFO: response: 00:36:22.876 { 00:36:22.876 "jsonrpc": "2.0", 00:36:22.876 "id": 1, 00:36:22.876 "result": true 00:36:22.876 } 00:36:22.876 00:36:22.876 INFO: response: 00:36:22.876 { 00:36:22.876 "jsonrpc": "2.0", 00:36:22.876 "id": 1, 00:36:22.876 "result": true 00:36:22.876 } 00:36:22.876 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.876 06:01:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.876 INFO: Setting log level to 40 00:36:22.876 INFO: Setting log level to 40 00:36:22.876 INFO: Setting log level to 40 00:36:22.876 [2024-12-10 06:01:40.607026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.876 06:01:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.876 06:01:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:22.876 06:01:40 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.876 06:01:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.215 Nvme0n1 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.215 [2024-12-10 06:01:43.516916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.215 06:01:43 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.215 [ 00:36:26.215 { 00:36:26.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:26.215 "subtype": "Discovery", 00:36:26.215 "listen_addresses": [], 00:36:26.215 "allow_any_host": true, 00:36:26.215 "hosts": [] 00:36:26.215 }, 00:36:26.215 { 00:36:26.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:26.215 "subtype": "NVMe", 00:36:26.215 "listen_addresses": [ 00:36:26.215 { 00:36:26.215 "trtype": "TCP", 00:36:26.215 "adrfam": "IPv4", 00:36:26.215 "traddr": "10.0.0.2", 00:36:26.215 "trsvcid": "4420" 00:36:26.215 } 00:36:26.215 ], 00:36:26.215 "allow_any_host": true, 00:36:26.215 "hosts": [], 00:36:26.215 "serial_number": "SPDK00000000000001", 00:36:26.215 "model_number": "SPDK bdev Controller", 00:36:26.215 "max_namespaces": 1, 00:36:26.215 "min_cntlid": 1, 00:36:26.215 "max_cntlid": 65519, 00:36:26.215 "namespaces": [ 00:36:26.215 { 00:36:26.215 "nsid": 1, 00:36:26.215 "bdev_name": "Nvme0n1", 00:36:26.215 "name": "Nvme0n1", 00:36:26.215 "nguid": "8F26D1FA4A2044D4BBFEEC8F1CA8B6D3", 00:36:26.215 "uuid": "8f26d1fa-4a20-44d4-bbfe-ec8f1ca8b6d3" 00:36:26.215 } 00:36:26.215 ] 00:36:26.215 } 00:36:26.215 ] 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:26.215 06:01:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:26.215 rmmod nvme_tcp 00:36:26.215 rmmod nvme_fabrics 00:36:26.215 rmmod nvme_keyring 00:36:26.215 06:01:43 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 402566 ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 402566 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 402566 ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 402566 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402566 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402566' 00:36:26.215 killing process with pid 402566 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 402566 00:36:26.215 06:01:43 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 402566 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:27.667 06:01:45 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.667 06:01:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:27.667 06:01:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.572 06:01:47 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:29.572 00:36:29.572 real 0m23.242s 00:36:29.572 user 0m29.475s 00:36:29.572 sys 0m6.834s 00:36:29.572 06:01:47 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:29.572 06:01:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:29.572 ************************************ 00:36:29.572 END TEST nvmf_identify_passthru 00:36:29.572 ************************************ 00:36:29.572 06:01:47 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:29.572 06:01:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:29.572 06:01:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:29.572 06:01:47 -- common/autotest_common.sh@10 -- # set +x 00:36:29.831 ************************************ 00:36:29.831 START TEST nvmf_dif 00:36:29.831 ************************************ 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:29.831 * Looking for test storage... 
00:36:29.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:29.831 06:01:47 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:29.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.831 --rc genhtml_branch_coverage=1 00:36:29.831 --rc genhtml_function_coverage=1 00:36:29.831 --rc genhtml_legend=1 00:36:29.831 --rc geninfo_all_blocks=1 00:36:29.831 --rc geninfo_unexecuted_blocks=1 00:36:29.831 00:36:29.831 ' 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:29.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.831 --rc genhtml_branch_coverage=1 00:36:29.831 --rc genhtml_function_coverage=1 00:36:29.831 --rc genhtml_legend=1 00:36:29.831 --rc geninfo_all_blocks=1 00:36:29.831 --rc geninfo_unexecuted_blocks=1 00:36:29.831 00:36:29.831 ' 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:36:29.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.831 --rc genhtml_branch_coverage=1 00:36:29.831 --rc genhtml_function_coverage=1 00:36:29.831 --rc genhtml_legend=1 00:36:29.831 --rc geninfo_all_blocks=1 00:36:29.831 --rc geninfo_unexecuted_blocks=1 00:36:29.831 00:36:29.831 ' 00:36:29.831 06:01:47 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:29.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.831 --rc genhtml_branch_coverage=1 00:36:29.831 --rc genhtml_function_coverage=1 00:36:29.831 --rc genhtml_legend=1 00:36:29.831 --rc geninfo_all_blocks=1 00:36:29.831 --rc geninfo_unexecuted_blocks=1 00:36:29.831 00:36:29.832 ' 00:36:29.832 06:01:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:29.832 06:01:47 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.832 06:01:47 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.832 06:01:47 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.832 06:01:47 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.832 06:01:47 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.832 06:01:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.832 06:01:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.832 06:01:47 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.832 06:01:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:29.832 06:01:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:29.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:29.832 06:01:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:29.832 06:01:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:29.832 06:01:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:29.832 06:01:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:29.832 06:01:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.832 06:01:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:29.832 06:01:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:29.832 06:01:47 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:29.832 06:01:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:36.396 06:01:54 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:36.396 06:01:54 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:36.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:36.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:36.397 06:01:54 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:36.397 Found net devices under 0000:af:00.0: cvl_0_0 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:36.397 Found net devices under 0000:af:00.1: cvl_0_1 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:36.397 
06:01:54 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:36.397 06:01:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:36.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:36.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:36:36.656 00:36:36.656 --- 10.0.0.2 ping statistics --- 00:36:36.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.656 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:36.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:36.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:36:36.656 00:36:36.656 --- 10.0.0.1 ping statistics --- 00:36:36.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.656 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:36.656 06:01:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:39.943 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:39.943 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:39.943 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:39.943 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:39.943 06:01:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.943 06:01:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.943 06:01:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.943 06:01:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.943 06:01:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.943 06:01:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:40.201 06:01:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:40.201 06:01:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:40.201 06:01:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.201 06:01:57 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=408842 00:36:40.201 06:01:57 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 408842 00:36:40.201 06:01:57 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 408842 ']' 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.201 06:01:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.201 [2024-12-10 06:01:57.983599] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:36:40.201 [2024-12-10 06:01:57.983642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:40.201 [2024-12-10 06:01:58.067533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.201 [2024-12-10 06:01:58.107610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:40.201 [2024-12-10 06:01:58.107641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:40.201 [2024-12-10 06:01:58.107648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:40.201 [2024-12-10 06:01:58.107654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:40.201 [2024-12-10 06:01:58.107660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:40.201 [2024-12-10 06:01:58.108112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:40.459 06:01:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 06:01:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.459 06:01:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:40.459 06:01:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 [2024-12-10 06:01:58.234826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.459 06:01:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 ************************************ 00:36:40.459 START TEST fio_dif_1_default 00:36:40.459 ************************************ 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 bdev_null0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.459 [2024-12-10 06:01:58.307125] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.459 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:40.460 { 00:36:40.460 "params": { 00:36:40.460 "name": "Nvme$subsystem", 00:36:40.460 "trtype": "$TEST_TRANSPORT", 00:36:40.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.460 "adrfam": "ipv4", 00:36:40.460 "trsvcid": "$NVMF_PORT", 00:36:40.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.460 "hdgst": ${hdgst:-false}, 00:36:40.460 "ddgst": ${ddgst:-false} 00:36:40.460 }, 00:36:40.460 "method": "bdev_nvme_attach_controller" 00:36:40.460 } 00:36:40.460 EOF 00:36:40.460 )") 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:40.460 "params": { 00:36:40.460 "name": "Nvme0", 00:36:40.460 "trtype": "tcp", 00:36:40.460 "traddr": "10.0.0.2", 00:36:40.460 "adrfam": "ipv4", 00:36:40.460 "trsvcid": "4420", 00:36:40.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.460 "hdgst": false, 00:36:40.460 "ddgst": false 00:36:40.460 }, 00:36:40.460 "method": "bdev_nvme_attach_controller" 00:36:40.460 }' 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:40.460 06:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.024 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.024 fio-3.35 
00:36:41.024 Starting 1 thread 00:36:53.221 00:36:53.221 filename0: (groupid=0, jobs=1): err= 0: pid=409205: Tue Dec 10 06:02:09 2024 00:36:53.221 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:36:53.221 slat (nsec): min=5813, max=32149, avg=6297.04, stdev=1412.33 00:36:53.221 clat (usec): min=40767, max=42029, avg=41009.51, stdev=166.20 00:36:53.221 lat (usec): min=40773, max=42035, avg=41015.81, stdev=166.30 00:36:53.221 clat percentiles (usec): 00:36:53.221 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:53.221 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:53.221 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:53.221 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:53.221 | 99.99th=[42206] 00:36:53.221 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:36:53.221 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:53.221 lat (msec) : 50=100.00% 00:36:53.221 cpu : usr=92.33%, sys=7.42%, ctx=14, majf=0, minf=0 00:36:53.221 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.221 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.221 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:53.221 00:36:53.221 Run status group 0 (all jobs): 00:36:53.221 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.221 00:36:53.221 real 0m11.239s 00:36:53.221 user 0m16.139s 00:36:53.221 sys 0m1.114s 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 ************************************ 00:36:53.221 END TEST fio_dif_1_default 00:36:53.221 ************************************ 00:36:53.221 06:02:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:53.221 06:02:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:53.221 06:02:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 ************************************ 00:36:53.221 START TEST fio_dif_1_multi_subsystems 00:36:53.221 ************************************ 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 bdev_null0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 [2024-12-10 06:02:09.617959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.221 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.221 bdev_null1 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.222 06:02:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.222 { 00:36:53.222 "params": { 00:36:53.222 "name": "Nvme$subsystem", 00:36:53.222 "trtype": "$TEST_TRANSPORT", 00:36:53.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.222 "adrfam": "ipv4", 00:36:53.222 "trsvcid": "$NVMF_PORT", 00:36:53.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.222 "hdgst": ${hdgst:-false}, 00:36:53.222 "ddgst": ${ddgst:-false} 00:36:53.222 }, 00:36:53.222 "method": "bdev_nvme_attach_controller" 00:36:53.222 } 00:36:53.222 EOF 00:36:53.222 )") 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.222 06:02:09 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.222 { 00:36:53.222 "params": { 00:36:53.222 "name": "Nvme$subsystem", 00:36:53.222 "trtype": "$TEST_TRANSPORT", 00:36:53.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.222 "adrfam": "ipv4", 00:36:53.222 "trsvcid": "$NVMF_PORT", 00:36:53.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.222 "hdgst": ${hdgst:-false}, 00:36:53.222 "ddgst": ${ddgst:-false} 00:36:53.222 }, 00:36:53.222 "method": "bdev_nvme_attach_controller" 00:36:53.222 } 00:36:53.222 EOF 00:36:53.222 )") 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.222 "params": { 00:36:53.222 "name": "Nvme0", 00:36:53.222 "trtype": "tcp", 00:36:53.222 "traddr": "10.0.0.2", 00:36:53.222 "adrfam": "ipv4", 00:36:53.222 "trsvcid": "4420", 00:36:53.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.222 "hdgst": false, 00:36:53.222 "ddgst": false 00:36:53.222 }, 00:36:53.222 "method": "bdev_nvme_attach_controller" 00:36:53.222 },{ 00:36:53.222 "params": { 00:36:53.222 "name": "Nvme1", 00:36:53.222 "trtype": "tcp", 00:36:53.222 "traddr": "10.0.0.2", 00:36:53.222 "adrfam": "ipv4", 00:36:53.222 "trsvcid": "4420", 00:36:53.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.222 "hdgst": false, 00:36:53.222 "ddgst": false 00:36:53.222 }, 00:36:53.222 "method": "bdev_nvme_attach_controller" 00:36:53.222 }' 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:53.222 06:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.222 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:53.222 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:53.222 fio-3.35 00:36:53.222 Starting 2 threads 00:37:03.199 00:37:03.199 filename0: (groupid=0, jobs=1): err= 0: pid=411133: Tue Dec 10 06:02:20 2024 00:37:03.199 read: IOPS=198, BW=795KiB/s (814kB/s)(7952KiB/10004msec) 00:37:03.199 slat (nsec): min=5882, max=52521, avg=6925.56, stdev=2030.30 00:37:03.199 clat (usec): min=377, max=42597, avg=20108.38, stdev=20360.43 00:37:03.199 lat (usec): min=383, max=42603, avg=20115.31, stdev=20359.95 00:37:03.199 clat percentiles (usec): 00:37:03.199 | 1.00th=[ 388], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 404], 00:37:03.199 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 619], 60.00th=[40633], 00:37:03.199 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:37:03.199 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:03.199 | 99.99th=[42730] 00:37:03.199 bw ( KiB/s): min= 704, max= 896, per=67.29%, avg=798.32, stdev=51.68, samples=19 00:37:03.199 iops : min= 176, max= 224, avg=199.58, stdev=12.92, samples=19 00:37:03.199 lat (usec) : 500=43.16%, 750=8.15%, 1000=0.40% 00:37:03.199 lat (msec) : 50=48.29% 00:37:03.199 cpu : usr=96.44%, sys=3.31%, ctx=8, majf=0, minf=0 00:37:03.199 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:03.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.199 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.199 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:03.199 filename1: (groupid=0, jobs=1): err= 0: pid=411134: Tue Dec 10 06:02:20 2024 00:37:03.199 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10011msec) 00:37:03.199 slat (nsec): min=5875, max=53092, avg=7555.77, stdev=2762.90 00:37:03.199 clat (usec): min=481, max=42044, avg=40837.73, stdev=2589.01 00:37:03.199 lat (usec): min=488, max=42055, avg=40845.28, stdev=2589.04 00:37:03.199 clat percentiles (usec): 00:37:03.199 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:03.199 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:03.199 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:03.199 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:03.199 | 99.99th=[42206] 00:37:03.199 bw ( KiB/s): min= 384, max= 416, per=32.89%, avg=390.40, stdev=13.13, samples=20 00:37:03.199 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:37:03.199 lat (usec) : 500=0.31%, 750=0.10% 00:37:03.199 lat (msec) : 50=99.59% 00:37:03.199 cpu : usr=96.55%, sys=3.20%, ctx=12, majf=0, minf=9 00:37:03.199 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.199 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.199 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:03.199 00:37:03.199 Run status group 0 (all jobs): 00:37:03.199 READ: bw=1186KiB/s (1214kB/s), 392KiB/s-795KiB/s (401kB/s-814kB/s), io=11.6MiB (12.2MB), run=10004-10011msec 00:37:03.199 06:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- 
# destroy_subsystems 0 1 00:37:03.199 06:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:03.199 06:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.199 06:02:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.199 00:37:03.199 real 0m11.459s 00:37:03.199 user 0m26.363s 00:37:03.199 sys 0m1.053s 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.199 06:02:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.199 ************************************ 00:37:03.199 END TEST fio_dif_1_multi_subsystems 00:37:03.199 ************************************ 00:37:03.199 06:02:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:03.199 06:02:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:03.199 06:02:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.199 06:02:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.199 ************************************ 00:37:03.199 START TEST fio_dif_rand_params 00:37:03.199 ************************************ 00:37:03.199 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:03.199 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:03.199 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:03.199 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:03.199 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:03.200 06:02:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.200 bdev_null0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.200 [2024-12-10 06:02:21.142780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:03.200 06:02:21 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:03.200 { 00:37:03.200 "params": { 00:37:03.200 "name": "Nvme$subsystem", 00:37:03.200 "trtype": "$TEST_TRANSPORT", 00:37:03.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:03.200 "adrfam": "ipv4", 00:37:03.200 "trsvcid": "$NVMF_PORT", 00:37:03.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:03.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:03.200 "hdgst": ${hdgst:-false}, 00:37:03.200 "ddgst": ${ddgst:-false} 00:37:03.200 }, 00:37:03.200 "method": "bdev_nvme_attach_controller" 00:37:03.200 } 00:37:03.200 EOF 00:37:03.200 )") 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:03.200 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 
00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:03.459 "params": { 00:37:03.459 "name": "Nvme0", 00:37:03.459 "trtype": "tcp", 00:37:03.459 "traddr": "10.0.0.2", 00:37:03.459 "adrfam": "ipv4", 00:37:03.459 "trsvcid": "4420", 00:37:03.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:03.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:03.459 "hdgst": false, 00:37:03.459 "ddgst": false 00:37:03.459 }, 00:37:03.459 "method": "bdev_nvme_attach_controller" 00:37:03.459 }' 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:03.459 06:02:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.717 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:03.717 ... 00:37:03.717 fio-3.35 00:37:03.717 Starting 3 threads 00:37:10.279 00:37:10.279 filename0: (groupid=0, jobs=1): err= 0: pid=412957: Tue Dec 10 06:02:27 2024 00:37:10.279 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5045msec) 00:37:10.280 slat (nsec): min=6228, max=35979, avg=10709.91, stdev=1863.55 00:37:10.280 clat (usec): min=4975, max=51866, avg=9624.39, stdev=5572.36 00:37:10.280 lat (usec): min=4982, max=51878, avg=9635.10, stdev=5572.41 00:37:10.280 clat percentiles (usec): 00:37:10.280 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 7963], 00:37:10.280 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:37:10.280 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11076], 00:37:10.280 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:37:10.280 | 99.99th=[51643] 00:37:10.280 bw ( KiB/s): min=29952, max=44544, per=33.65%, avg=40012.80, stdev=4731.27, samples=10 00:37:10.280 iops : min= 234, max= 348, avg=312.60, stdev=36.96, samples=10 00:37:10.280 lat (msec) : 10=80.20%, 20=17.94%, 50=1.34%, 100=0.51% 00:37:10.280 cpu : usr=94.15%, sys=5.55%, ctx=8, majf=0, minf=37 00:37:10.280 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.280 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.280 filename0: (groupid=0, jobs=1): err= 0: pid=412958: Tue Dec 10 06:02:27 2024 00:37:10.280 read: IOPS=309, BW=38.6MiB/s (40.5MB/s)(195MiB/5043msec) 00:37:10.280 slat (nsec): min=6157, max=26400, avg=10679.51, 
stdev=1929.55 00:37:10.280 clat (usec): min=3596, max=50683, avg=9664.42, stdev=3366.49 00:37:10.280 lat (usec): min=3605, max=50694, avg=9675.10, stdev=3366.54 00:37:10.280 clat percentiles (usec): 00:37:10.280 | 1.00th=[ 3851], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 8029], 00:37:10.280 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159], 00:37:10.280 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11731], 95.00th=[12256], 00:37:10.280 | 99.00th=[13304], 99.50th=[44827], 99.90th=[50594], 99.95th=[50594], 00:37:10.280 | 99.99th=[50594] 00:37:10.280 bw ( KiB/s): min=36352, max=48128, per=33.52%, avg=39859.20, stdev=3432.59, samples=10 00:37:10.280 iops : min= 284, max= 376, avg=311.40, stdev=26.82, samples=10 00:37:10.280 lat (msec) : 4=1.22%, 10=56.06%, 20=42.21%, 50=0.38%, 100=0.13% 00:37:10.280 cpu : usr=94.25%, sys=5.47%, ctx=9, majf=0, minf=82 00:37:10.280 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.280 issued rwts: total=1559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.280 filename0: (groupid=0, jobs=1): err= 0: pid=412959: Tue Dec 10 06:02:27 2024 00:37:10.280 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(195MiB/5044msec) 00:37:10.280 slat (nsec): min=6142, max=26682, avg=10694.90, stdev=1764.72 00:37:10.280 clat (usec): min=3200, max=50845, avg=9647.08, stdev=5076.87 00:37:10.280 lat (usec): min=3207, max=50856, avg=9657.77, stdev=5077.01 00:37:10.280 clat percentiles (usec): 00:37:10.280 | 1.00th=[ 3982], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 8094], 00:37:10.280 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:37:10.280 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:37:10.280 | 99.00th=[49021], 99.50th=[49546], 
99.90th=[50594], 99.95th=[50594], 00:37:10.280 | 99.99th=[50594] 00:37:10.280 bw ( KiB/s): min=35328, max=46848, per=33.58%, avg=39936.00, stdev=3363.91, samples=10 00:37:10.280 iops : min= 276, max= 366, avg=312.00, stdev=26.28, samples=10 00:37:10.280 lat (msec) : 4=1.09%, 10=71.45%, 20=25.99%, 50=1.15%, 100=0.32% 00:37:10.280 cpu : usr=94.35%, sys=5.37%, ctx=12, majf=0, minf=20 00:37:10.280 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.280 issued rwts: total=1562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.280 00:37:10.280 Run status group 0 (all jobs): 00:37:10.280 READ: bw=116MiB/s (122MB/s), 38.6MiB/s-38.8MiB/s (40.5MB/s-40.7MB/s), io=586MiB (614MB), run=5043-5045msec 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:10.280 06:02:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 bdev_null0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 [2024-12-10 06:02:27.285767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 bdev_null1 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x
00:37:10.280 bdev_null2
00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:10.280 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:10.281 {
00:37:10.281 "params": {
00:37:10.281 "name": "Nvme$subsystem",
00:37:10.281 "trtype": "$TEST_TRANSPORT",
00:37:10.281 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:10.281 "adrfam": "ipv4",
00:37:10.281 "trsvcid": "$NVMF_PORT",
00:37:10.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:10.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:10.281 "hdgst": ${hdgst:-false},
00:37:10.281 "ddgst": ${ddgst:-false}
00:37:10.281 },
00:37:10.281 "method": "bdev_nvme_attach_controller"
00:37:10.281 }
00:37:10.281 EOF
00:37:10.281 )")
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:10.281 {
00:37:10.281 "params": {
00:37:10.281 "name": "Nvme$subsystem",
00:37:10.281 "trtype": "$TEST_TRANSPORT",
00:37:10.281 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:10.281 "adrfam": "ipv4",
00:37:10.281 "trsvcid": "$NVMF_PORT",
00:37:10.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:10.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:10.281 "hdgst": ${hdgst:-false},
00:37:10.281 "ddgst": ${ddgst:-false}
00:37:10.281 },
00:37:10.281 "method": "bdev_nvme_attach_controller"
00:37:10.281 }
00:37:10.281 EOF
00:37:10.281 )")
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:10.281 {
00:37:10.281 "params": {
00:37:10.281 "name": "Nvme$subsystem",
00:37:10.281 "trtype": "$TEST_TRANSPORT",
00:37:10.281 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:10.281 "adrfam": "ipv4",
00:37:10.281 "trsvcid": "$NVMF_PORT",
00:37:10.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:10.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:10.281 "hdgst": ${hdgst:-false},
00:37:10.281 "ddgst": ${ddgst:-false}
00:37:10.281 },
00:37:10.281 "method": "bdev_nvme_attach_controller"
00:37:10.281 }
00:37:10.281 EOF
00:37:10.281 )")
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:37:10.281 "params": {
00:37:10.281 "name": "Nvme0",
00:37:10.281 "trtype": "tcp",
00:37:10.281 "traddr": "10.0.0.2",
00:37:10.281 "adrfam": "ipv4",
00:37:10.281 "trsvcid": "4420",
00:37:10.281 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:10.281 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:10.281 "hdgst": false,
00:37:10.281 "ddgst": false
00:37:10.281 },
00:37:10.281 "method": "bdev_nvme_attach_controller"
00:37:10.281 },{
00:37:10.281 "params": {
00:37:10.281 "name": "Nvme1",
00:37:10.281 "trtype": "tcp",
00:37:10.281 "traddr": "10.0.0.2",
00:37:10.281 "adrfam": "ipv4",
00:37:10.281 "trsvcid": "4420",
00:37:10.281 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:10.281 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:10.281 "hdgst": false,
00:37:10.281 "ddgst": false
00:37:10.281 },
00:37:10.281 "method": "bdev_nvme_attach_controller"
00:37:10.281 },{
00:37:10.281 "params": {
00:37:10.281 "name": "Nvme2",
00:37:10.281 "trtype": "tcp",
00:37:10.281 "traddr": "10.0.0.2",
00:37:10.281 "adrfam": "ipv4",
00:37:10.281 "trsvcid": "4420",
00:37:10.281 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:37:10.281 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:37:10.281 "hdgst": false,
00:37:10.281 "ddgst": false
00:37:10.281 },
00:37:10.281 "method": "bdev_nvme_attach_controller"
00:37:10.281 }'
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:37:10.281 06:02:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:10.281 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:37:10.281 ...
00:37:10.281 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:37:10.281 ...
00:37:10.281 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:37:10.281 ...
00:37:10.281 fio-3.35
00:37:22.477 Starting 24 threads
00:37:22.477
00:37:22.477 filename0: (groupid=0, jobs=1): err= 0: pid=414134: Tue Dec 10 06:02:38 2024
00:37:22.477 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec)
00:37:22.477 slat (nsec): min=7484, max=44497, avg=12171.97, stdev=4035.12
00:37:22.477 clat (usec): min=12846, max=36577, avg=30220.72, stdev=1363.90
00:37:22.477 lat (usec): min=12859, max=36609, avg=30232.89, stdev=1363.80
00:37:22.477 clat percentiles (usec):
00:37:22.477 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278],
00:37:22.477 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.477 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802],
00:37:22.477 | 99.00th=[31327], 99.50th=[32113], 99.90th=[36439], 99.95th=[36439],
00:37:22.477 | 99.99th=[36439]
00:37:22.477 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.35, stdev=65.06, samples=20
00:37:22.477 iops : min= 512, max= 544, avg=526.30, stdev=16.23, samples=20
00:37:22.477 lat (msec) : 20=0.61%, 50=99.39%
00:37:22.477 cpu : usr=98.53%, sys=1.09%, ctx=16, majf=0, minf=9
00:37:22.477 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.477 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.477 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.477 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.477 filename0: (groupid=0, jobs=1): err= 0: pid=414135: Tue Dec 10 06:02:38 2024
00:37:22.477 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec)
00:37:22.477 slat (nsec): min=8905, max=90767, avg=37885.84, stdev=19140.20
00:37:22.477 clat (usec): min=10730, max=57906, avg=30149.03, stdev=1973.15
00:37:22.477 lat (usec): min=10745, max=57918, avg=30186.92, stdev=1971.34
00:37:22.477 clat percentiles (usec):
00:37:22.477 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:37:22.477 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278],
00:37:22.477 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.477 | 99.00th=[31065], 99.50th=[31589], 99.90th=[57934], 99.95th=[57934],
00:37:22.477 | 99.99th=[57934]
00:37:22.477 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2099.35, stdev=76.21, samples=20
00:37:22.477 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20
00:37:22.478 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30%
00:37:22.478 cpu : usr=98.45%, sys=1.13%, ctx=13, majf=0, minf=9
00:37:22.478 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename0: (groupid=0, jobs=1): err= 0: pid=414136: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10013msec)
00:37:22.478 slat (usec): min=3, max=105, avg=21.49, stdev=12.13
00:37:22.478 clat (usec): min=19607, max=42420, avg=30251.26, stdev=911.44
00:37:22.478 lat (usec): min=19670, max=42434, avg=30272.75, stdev=908.47
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:37:22.478 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802],
00:37:22.478 | 99.00th=[31065], 99.50th=[31589], 99.90th=[42206], 99.95th=[42206],
00:37:22.478 | 99.99th=[42206]
00:37:22.478 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20
00:37:22.478 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20
00:37:22.478 lat (msec) : 20=0.13%, 50=99.87%
00:37:22.478 cpu : usr=98.60%, sys=1.00%, ctx=15, majf=0, minf=9
00:37:22.478 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename0: (groupid=0, jobs=1): err= 0: pid=414137: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10013msec)
00:37:22.478 slat (nsec): min=7499, max=43864, avg=20365.93, stdev=6237.22
00:37:22.478 clat (usec): min=23112, max=37220, avg=30254.54, stdev=1119.42
00:37:22.478 lat (usec): min=23143, max=37233, avg=30274.90, stdev=1119.63
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[23725], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:37:22.478 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802],
00:37:22.478 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963],
00:37:22.478 | 99.99th=[36963]
00:37:22.478 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20
00:37:22.478 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20
00:37:22.478 lat (msec) : 50=100.00%
00:37:22.478 cpu : usr=98.44%, sys=1.17%, ctx=12, majf=0, minf=9
00:37:22.478 IO depths : 1=5.6%, 2=11.7%, 4=24.4%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename0: (groupid=0, jobs=1): err= 0: pid=414138: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=528, BW=2114KiB/s (2164kB/s)(20.7MiB/10022msec)
00:37:22.478 slat (nsec): min=7625, max=73546, avg=24564.26, stdev=14733.77
00:37:22.478 clat (usec): min=12858, max=32217, avg=30074.72, stdev=1416.25
00:37:22.478 lat (usec): min=12889, max=32231, avg=30099.28, stdev=1415.27
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[21627], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016],
00:37:22.478 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.478 | 99.00th=[31065], 99.50th=[31065], 99.90th=[32113], 99.95th=[32113],
00:37:22.478 | 99.99th=[32113]
00:37:22.478 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=77.69, samples=20
00:37:22.478 iops : min= 512, max= 576, avg=528.00, stdev=19.42, samples=20
00:37:22.478 lat (msec) : 20=0.60%, 50=99.40%
00:37:22.478 cpu : usr=98.36%, sys=1.25%, ctx=12, majf=0, minf=9
00:37:22.478 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename0: (groupid=0, jobs=1): err= 0: pid=414139: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10011msec)
00:37:22.478 slat (nsec): min=6656, max=40562, avg=19927.58, stdev=5762.68
00:37:22.478 clat (usec): min=23261, max=37328, avg=30255.59, stdev=806.42
00:37:22.478 lat (usec): min=23276, max=37355, avg=30275.52, stdev=806.51
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[29230], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:37:22.478 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802],
00:37:22.478 | 99.00th=[31851], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487],
00:37:22.478 | 99.99th=[37487]
00:37:22.478 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.40, stdev=64.17, samples=20
00:37:22.478 iops : min= 512, max= 544, avg=524.85, stdev=16.04, samples=20
00:37:22.478 lat (msec) : 50=100.00%
00:37:22.478 cpu : usr=98.46%, sys=1.15%, ctx=13, majf=0, minf=9
00:37:22.478 IO depths : 1=5.6%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename0: (groupid=0, jobs=1): err= 0: pid=414140: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec)
00:37:22.478 slat (nsec): min=7078, max=90246, avg=42289.22, stdev=19235.47
00:37:22.478 clat (usec): min=10419, max=58560, avg=30104.72, stdev=2004.86
00:37:22.478 lat (usec): min=10478, max=58576, avg=30147.01, stdev=2002.44
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754],
00:37:22.478 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:37:22.478 | 99.00th=[31065], 99.50th=[31589], 99.90th=[58459], 99.95th=[58459],
00:37:22.478 | 99.99th=[58459]
00:37:22.478 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2099.20, stdev=76.58, samples=20
00:37:22.478 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20
00:37:22.478 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30%
00:37:22.478 cpu : usr=98.42%, sys=1.19%, ctx=15, majf=0, minf=9
00:37:22.478 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename0: (groupid=0, jobs=1): err= 0: pid=414141: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=536, BW=2145KiB/s (2197kB/s)(21.0MiB/10016msec)
00:37:22.478 slat (nsec): min=7437, max=41718, avg=13435.66, stdev=5032.01
00:37:22.478 clat (usec): min=11883, max=37921, avg=29718.75, stdev=2560.22
00:37:22.478 lat (usec): min=11892, max=37929, avg=29732.19, stdev=2560.83
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[19006], 5.00th=[23200], 10.00th=[30016], 20.00th=[30278],
00:37:22.478 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.478 | 99.00th=[31851], 99.50th=[34341], 99.90th=[37487], 99.95th=[38011],
00:37:22.478 | 99.99th=[38011]
00:37:22.478 bw ( KiB/s): min= 2048, max= 2784, per=4.24%, avg=2142.40, stdev=163.41, samples=20
00:37:22.478 iops : min= 512, max= 696, avg=535.60, stdev=40.85, samples=20
00:37:22.478 lat (msec) : 20=3.15%, 50=96.85%
00:37:22.478 cpu : usr=98.45%, sys=1.16%, ctx=13, majf=0, minf=9
00:37:22.478 IO depths : 1=4.7%, 2=10.6%, 4=23.7%, 8=53.2%, 16=7.8%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5372,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename1: (groupid=0, jobs=1): err= 0: pid=414142: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec)
00:37:22.478 slat (nsec): min=6849, max=81262, avg=37316.14, stdev=16850.46
00:37:22.478 clat (usec): min=10365, max=58585, avg=30100.49, stdev=2163.63
00:37:22.478 lat (usec): min=10407, max=58606, avg=30137.81, stdev=2162.59
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[23462], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754],
00:37:22.478 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:37:22.478 | 99.00th=[31327], 99.50th=[37487], 99.90th=[58459], 99.95th=[58459],
00:37:22.478 | 99.99th=[58459]
00:37:22.478 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2099.20, stdev=74.07, samples=20
00:37:22.478 iops : min= 480, max= 544, avg=524.80, stdev=18.52, samples=20
00:37:22.478 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30%
00:37:22.478 cpu : usr=98.84%, sys=0.64%, ctx=55, majf=0, minf=9
00:37:22.478 IO depths : 1=4.8%, 2=11.0%, 4=24.9%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0%
00:37:22.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.478 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.478 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.478 filename1: (groupid=0, jobs=1): err= 0: pid=414143: Tue Dec 10 06:02:38 2024
00:37:22.478 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec)
00:37:22.478 slat (nsec): min=4992, max=90382, avg=44127.20, stdev=18980.71
00:37:22.478 clat (usec): min=10308, max=58435, avg=30056.14, stdev=2003.60
00:37:22.478 lat (usec): min=10346, max=58450, avg=30100.27, stdev=2001.52
00:37:22.478 clat percentiles (usec):
00:37:22.478 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754],
00:37:22.478 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278],
00:37:22.478 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:37:22.478 | 99.00th=[31065], 99.50th=[31589], 99.90th=[58459], 99.95th=[58459],
00:37:22.478 | 99.99th=[58459]
00:37:22.479 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2099.20, stdev=76.58, samples=20
00:37:22.479 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20
00:37:22.479 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30%
00:37:22.479 cpu : usr=98.41%, sys=1.20%, ctx=13, majf=0, minf=9
00:37:22.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename1: (groupid=0, jobs=1): err= 0: pid=414144: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec)
00:37:22.479 slat (nsec): min=4425, max=89138, avg=29778.10, stdev=16771.54
00:37:22.479 clat (usec): min=19122, max=39654, avg=30212.97, stdev=852.95
00:37:22.479 lat (usec): min=19172, max=39667, avg=30242.75, stdev=849.46
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016],
00:37:22.479 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.479 | 99.00th=[31065], 99.50th=[31851], 99.90th=[39584], 99.95th=[39584],
00:37:22.479 | 99.99th=[39584]
00:37:22.479 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20
00:37:22.479 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20
00:37:22.479 lat (msec) : 20=0.30%, 50=99.70%
00:37:22.479 cpu : usr=98.57%, sys=1.04%, ctx=16, majf=0, minf=9
00:37:22.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename1: (groupid=0, jobs=1): err= 0: pid=414145: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec)
00:37:22.479 slat (nsec): min=8597, max=89435, avg=38517.44, stdev=19639.75
00:37:22.479 clat (usec): min=10761, max=66110, avg=30150.78, stdev=2120.69
00:37:22.479 lat (usec): min=10785, max=66126, avg=30189.29, stdev=2119.31
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[28705], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:37:22.479 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.479 | 99.00th=[31327], 99.50th=[36963], 99.90th=[58459], 99.95th=[58459],
00:37:22.479 | 99.99th=[66323]
00:37:22.479 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2099.20, stdev=76.58, samples=20
00:37:22.479 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20
00:37:22.479 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30%
00:37:22.479 cpu : usr=98.66%, sys=0.95%, ctx=13, majf=0, minf=9
00:37:22.479 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename1: (groupid=0, jobs=1): err= 0: pid=414146: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10009msec)
00:37:22.479 slat (nsec): min=7484, max=47382, avg=16292.64, stdev=5968.01
00:37:22.479 clat (usec): min=11878, max=48792, avg=30098.62, stdev=2080.90
00:37:22.479 lat (usec): min=11887, max=48808, avg=30114.91, stdev=2080.83
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[20841], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278],
00:37:22.479 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802],
00:37:22.479 | 99.00th=[36439], 99.50th=[37487], 99.90th=[48497], 99.95th=[48497],
00:37:22.479 | 99.99th=[49021]
00:37:22.479 bw ( KiB/s): min= 2048, max= 2272, per=4.18%, avg=2112.80, stdev=76.73, samples=20
00:37:22.479 iops : min= 512, max= 568, avg=528.20, stdev=19.18, samples=20
00:37:22.479 lat (msec) : 20=0.79%, 50=99.21%
00:37:22.479 cpu : usr=98.29%, sys=1.31%, ctx=12, majf=0, minf=9
00:37:22.479 IO depths : 1=4.4%, 2=10.5%, 4=24.3%, 8=52.7%, 16=8.1%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5298,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename1: (groupid=0, jobs=1): err= 0: pid=414147: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=528, BW=2114KiB/s (2164kB/s)(20.7MiB/10022msec)
00:37:22.479 slat (nsec): min=7702, max=73001, avg=21734.66, stdev=14531.71
00:37:22.479 clat (usec): min=12899, max=31953, avg=30090.63, stdev=1418.79
00:37:22.479 lat (usec): min=12940, max=31966, avg=30112.36, stdev=1418.06
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[21627], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016],
00:37:22.479 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.479 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851],
00:37:22.479 | 99.99th=[31851]
00:37:22.479 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=77.69, samples=20
00:37:22.479 iops : min= 512, max= 576, avg=528.00, stdev=19.42, samples=20
00:37:22.479 lat (msec) : 20=0.60%, 50=99.40%
00:37:22.479 cpu : usr=98.58%, sys=1.04%, ctx=14, majf=0, minf=9
00:37:22.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename1: (groupid=0, jobs=1): err= 0: pid=414148: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=528, BW=2114KiB/s (2164kB/s)(20.7MiB/10022msec)
00:37:22.479 slat (nsec): min=7544, max=73506, avg=26450.39, stdev=15057.66
00:37:22.479 clat (usec): min=12931, max=48928, avg=30036.80, stdev=1823.91
00:37:22.479 lat (usec): min=12953, max=48944, avg=30063.25, stdev=1823.64
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[20841], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:37:22.479 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.479 | 99.00th=[31065], 99.50th=[31851], 99.90th=[49021], 99.95th=[49021],
00:37:22.479 | 99.99th=[49021]
00:37:22.479 bw ( KiB/s): min= 2048, max= 2192, per=4.18%, avg=2112.00, stdev=65.87, samples=20
00:37:22.479 iops : min= 512, max= 548, avg=528.00, stdev=16.47, samples=20
00:37:22.479 lat (msec) : 20=0.87%, 50=99.13%
00:37:22.479 cpu : usr=98.59%, sys=1.01%, ctx=13, majf=0, minf=9
00:37:22.479 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename1: (groupid=0, jobs=1): err= 0: pid=414149: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec)
00:37:22.479 slat (nsec): min=6668, max=73385, avg=26099.54, stdev=15143.41
00:37:22.479 clat (usec): min=21436, max=34238, avg=30161.01, stdev=607.39
00:37:22.479 lat (usec): min=21450, max=34257, avg=30187.11, stdev=606.15
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:37:22.479 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:37:22.479 | 99.00th=[31065], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341],
00:37:22.479 | 99.99th=[34341]
00:37:22.479 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=64.93, samples=19
00:37:22.479 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19
00:37:22.479 lat (msec) : 50=100.00%
00:37:22.479 cpu : usr=98.24%, sys=1.37%, ctx=13, majf=0, minf=9
00:37:22.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename2: (groupid=0, jobs=1): err= 0: pid=414150: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec)
00:37:22.479 slat (nsec): min=4355, max=89066, avg=21678.47, stdev=12775.51
00:37:22.479 clat (usec): min=19613, max=39549, avg=30260.45, stdev=969.77
00:37:22.479 lat (usec): min=19629, max=39563, avg=30282.13, stdev=969.07
00:37:22.479 clat percentiles (usec):
00:37:22.479 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:37:22.479 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.479 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802],
00:37:22.479 | 99.00th=[31327], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584],
00:37:22.479 | 99.99th=[39584]
00:37:22.479 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20
00:37:22.479 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20
00:37:22.479 lat (msec) : 20=0.30%, 50=99.70%
00:37:22.479 cpu : usr=98.51%, sys=1.09%, ctx=21, majf=0, minf=9
00:37:22.479 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:37:22.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.479 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.479 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.479 filename2: (groupid=0, jobs=1): err= 0: pid=414151: Tue Dec 10 06:02:38 2024
00:37:22.479 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec)
00:37:22.479 slat (nsec): min=7867, max=43335, avg=19475.09, stdev=5705.41
00:37:22.479 clat (usec): min=5677, max=37624, avg=30168.07, stdev=1742.76
00:37:22.479 lat (usec): min=5691, max=37647, avg=30187.55, stdev=1743.26
00:37:22.479 clat percentiles (usec):
00:37:22.480 | 1.00th=[23462], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:37:22.480 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802],
00:37:22.480 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963],
00:37:22.480 | 99.99th=[37487]
00:37:22.480 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.60, stdev=65.33, samples=20
00:37:22.480 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20
00:37:22.480 lat (msec) : 10=0.17%, 20=0.13%, 50=99.70%
00:37:22.480 cpu : usr=98.24%, sys=1.36%, ctx=7, majf=0, minf=9
00:37:22.480 IO depths : 1=5.6%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0%
00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.480 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.480 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:22.480 filename2: (groupid=0, jobs=1): err= 0: pid=414152: Tue Dec 10 06:02:38 2024
00:37:22.480 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec)
00:37:22.480 slat (nsec): min=4671, max=40785, avg=18877.22, stdev=6988.87
00:37:22.480 clat (usec): min=11718, max=51000, avg=30237.52, stdev=1572.93
00:37:22.480 lat (usec): min=11726, max=51014, avg=30256.40, stdev=1573.00
00:37:22.480 clat percentiles (usec):
00:37:22.480 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:37:22.480 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:37:22.480 | 99.00th=[31065], 99.50th=[31851], 99.90th=[51119], 99.95th=[51119],
00:37:22.480 | 99.99th=[51119]
00:37:22.480 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2099.20, stdev=76.58, samples=20
00:37:22.480 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20
00:37:22.480 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30%
00:37:22.480 cpu : usr=98.44%, sys=1.16%, ctx=13, majf=0, minf=9
00:37:22.480 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:22.480 complete : 0=0.0%, 4=94.1%, 8=0.0%,
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.480 filename2: (groupid=0, jobs=1): err= 0: pid=414153: Tue Dec 10 06:02:38 2024 00:37:22.480 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.7MiB/10022msec) 00:37:22.480 slat (nsec): min=7459, max=73365, avg=25947.36, stdev=14949.97 00:37:22.480 clat (usec): min=12874, max=49523, avg=29969.46, stdev=2105.58 00:37:22.480 lat (usec): min=12904, max=49546, avg=29995.40, stdev=2106.24 00:37:22.480 clat percentiles (usec): 00:37:22.480 | 1.00th=[19006], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:37:22.480 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:22.480 | 99.00th=[31065], 99.50th=[32113], 99.90th=[49546], 99.95th=[49546], 00:37:22.480 | 99.99th=[49546] 00:37:22.480 bw ( KiB/s): min= 2048, max= 2288, per=4.19%, avg=2116.80, stdev=74.88, samples=20 00:37:22.480 iops : min= 512, max= 572, avg=529.20, stdev=18.72, samples=20 00:37:22.480 lat (msec) : 20=1.24%, 50=98.76% 00:37:22.480 cpu : usr=98.41%, sys=1.20%, ctx=13, majf=0, minf=9 00:37:22.480 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 issued rwts: total=5308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.480 filename2: (groupid=0, jobs=1): err= 0: pid=414154: Tue Dec 10 06:02:38 2024 00:37:22.480 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10012msec) 00:37:22.480 slat (nsec): min=7017, max=44638, avg=20212.18, stdev=6066.78 00:37:22.480 clat (usec): min=22026, max=43407, avg=30248.49, stdev=836.11 00:37:22.480 lat (usec): 
min=22067, max=43427, avg=30268.70, stdev=835.97 00:37:22.480 clat percentiles (usec): 00:37:22.480 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:37:22.480 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:22.480 | 99.00th=[31327], 99.50th=[32375], 99.90th=[43254], 99.95th=[43254], 00:37:22.480 | 99.99th=[43254] 00:37:22.480 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20 00:37:22.480 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20 00:37:22.480 lat (msec) : 50=100.00% 00:37:22.480 cpu : usr=98.36%, sys=1.25%, ctx=12, majf=0, minf=9 00:37:22.480 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.480 filename2: (groupid=0, jobs=1): err= 0: pid=414155: Tue Dec 10 06:02:38 2024 00:37:22.480 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec) 00:37:22.480 slat (nsec): min=5575, max=89627, avg=42760.25, stdev=19294.14 00:37:22.480 clat (usec): min=19690, max=38208, avg=30095.54, stdev=785.71 00:37:22.480 lat (usec): min=19706, max=38224, avg=30138.30, stdev=782.43 00:37:22.480 clat percentiles (usec): 00:37:22.480 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:37:22.480 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:22.480 | 99.00th=[31065], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011], 00:37:22.480 | 99.99th=[38011] 00:37:22.480 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.40, stdev=64.17, 
samples=20 00:37:22.480 iops : min= 512, max= 544, avg=524.85, stdev=16.04, samples=20 00:37:22.480 lat (msec) : 20=0.30%, 50=99.70% 00:37:22.480 cpu : usr=98.66%, sys=0.94%, ctx=13, majf=0, minf=9 00:37:22.480 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.480 filename2: (groupid=0, jobs=1): err= 0: pid=414156: Tue Dec 10 06:02:38 2024 00:37:22.480 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec) 00:37:22.480 slat (nsec): min=8060, max=92096, avg=44468.42, stdev=19487.34 00:37:22.480 clat (usec): min=10243, max=73367, avg=30030.53, stdev=2148.91 00:37:22.480 lat (usec): min=10257, max=73384, avg=30075.00, stdev=2147.48 00:37:22.480 clat percentiles (usec): 00:37:22.480 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:37:22.480 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:22.480 | 99.00th=[31327], 99.50th=[32113], 99.90th=[57934], 99.95th=[57934], 00:37:22.480 | 99.99th=[72877] 00:37:22.480 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2099.35, stdev=76.21, samples=20 00:37:22.480 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20 00:37:22.480 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:37:22.480 cpu : usr=98.68%, sys=0.93%, ctx=12, majf=0, minf=9 00:37:22.480 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 issued rwts: total=5264,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.480 filename2: (groupid=0, jobs=1): err= 0: pid=414157: Tue Dec 10 06:02:38 2024 00:37:22.480 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec) 00:37:22.480 slat (usec): min=7, max=104, avg=43.93, stdev=20.88 00:37:22.480 clat (usec): min=10321, max=56577, avg=30003.18, stdev=1923.85 00:37:22.480 lat (usec): min=10330, max=56615, avg=30047.11, stdev=1923.75 00:37:22.480 clat percentiles (usec): 00:37:22.480 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:37:22.480 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:37:22.480 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:22.480 | 99.00th=[31065], 99.50th=[31589], 99.90th=[56361], 99.95th=[56361], 00:37:22.480 | 99.99th=[56361] 00:37:22.480 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2099.20, stdev=76.58, samples=20 00:37:22.480 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20 00:37:22.480 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:37:22.480 cpu : usr=98.59%, sys=1.00%, ctx=21, majf=0, minf=9 00:37:22.480 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:22.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.480 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.480 00:37:22.480 Run status group 0 (all jobs): 00:37:22.480 READ: bw=49.4MiB/s (51.8MB/s), 2103KiB/s-2145KiB/s (2153kB/s-2197kB/s), io=495MiB (519MB), run=10005-10022msec 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.480 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:22.481 06:02:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 bdev_null0 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 [2024-12-10 06:02:39.272620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 bdev_null1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.481 { 00:37:22.481 "params": { 00:37:22.481 "name": "Nvme$subsystem", 00:37:22.481 "trtype": "$TEST_TRANSPORT", 00:37:22.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.481 "adrfam": "ipv4", 00:37:22.481 "trsvcid": "$NVMF_PORT", 00:37:22.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.481 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:22.481 "hdgst": ${hdgst:-false}, 00:37:22.481 "ddgst": ${ddgst:-false} 00:37:22.481 }, 00:37:22.481 "method": "bdev_nvme_attach_controller" 00:37:22.481 } 00:37:22.481 EOF 00:37:22.481 )") 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:22.481 
06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.481 { 00:37:22.481 "params": { 00:37:22.481 "name": "Nvme$subsystem", 00:37:22.481 "trtype": "$TEST_TRANSPORT", 00:37:22.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.481 "adrfam": "ipv4", 00:37:22.481 "trsvcid": "$NVMF_PORT", 00:37:22.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.481 "hdgst": ${hdgst:-false}, 00:37:22.481 "ddgst": ${ddgst:-false} 00:37:22.481 }, 00:37:22.481 "method": "bdev_nvme_attach_controller" 00:37:22.481 } 00:37:22.481 EOF 00:37:22.481 )") 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:22.481 06:02:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:22.481 "params": { 00:37:22.481 "name": "Nvme0", 00:37:22.481 "trtype": "tcp", 00:37:22.481 "traddr": "10.0.0.2", 00:37:22.481 "adrfam": "ipv4", 00:37:22.481 "trsvcid": "4420", 00:37:22.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.481 "hdgst": false, 00:37:22.481 "ddgst": false 00:37:22.481 }, 00:37:22.481 "method": "bdev_nvme_attach_controller" 00:37:22.481 },{ 00:37:22.481 "params": { 00:37:22.481 "name": "Nvme1", 00:37:22.481 "trtype": "tcp", 00:37:22.481 "traddr": "10.0.0.2", 00:37:22.481 "adrfam": "ipv4", 00:37:22.481 "trsvcid": "4420", 00:37:22.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:22.482 "hdgst": false, 00:37:22.482 "ddgst": false 00:37:22.482 }, 00:37:22.482 "method": "bdev_nvme_attach_controller" 00:37:22.482 }' 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:22.482 06:02:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.482 06:02:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.482 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:22.482 ... 00:37:22.482 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:22.482 ... 00:37:22.482 fio-3.35 00:37:22.482 Starting 4 threads 00:37:27.755 00:37:27.755 filename0: (groupid=0, jobs=1): err= 0: pid=416082: Tue Dec 10 06:02:45 2024 00:37:27.755 read: IOPS=2653, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:37:27.755 slat (nsec): min=6081, max=42847, avg=9652.25, stdev=3842.78 00:37:27.755 clat (usec): min=589, max=42186, avg=2984.51, stdev=1022.57 00:37:27.755 lat (usec): min=601, max=42212, avg=2994.16, stdev=1022.69 00:37:27.755 clat percentiles (usec): 00:37:27.755 | 1.00th=[ 1893], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2802], 00:37:27.755 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:37:27.755 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3425], 00:37:27.755 | 99.00th=[ 4228], 99.50th=[ 4490], 99.90th=[ 5407], 99.95th=[42206], 00:37:27.755 | 99.99th=[42206] 00:37:27.755 bw ( KiB/s): min=19440, max=21904, per=25.06%, avg=21143.11, stdev=727.92, samples=9 00:37:27.755 iops : min= 2430, max= 2738, avg=2642.89, stdev=90.99, samples=9 00:37:27.755 lat (usec) : 750=0.01% 00:37:27.755 lat (msec) : 2=1.35%, 4=97.39%, 10=1.20%, 50=0.06% 00:37:27.755 cpu : usr=95.54%, sys=4.14%, ctx=14, majf=0, minf=9 00:37:27.755 IO depths : 1=0.8%, 2=6.0%, 4=66.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 
complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 issued rwts: total=13275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.755 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.755 filename0: (groupid=0, jobs=1): err= 0: pid=416083: Tue Dec 10 06:02:45 2024 00:37:27.755 read: IOPS=2618, BW=20.5MiB/s (21.5MB/s)(102MiB/5002msec) 00:37:27.755 slat (nsec): min=6079, max=38584, avg=10884.22, stdev=4305.78 00:37:27.755 clat (usec): min=663, max=5542, avg=3017.71, stdev=359.64 00:37:27.755 lat (usec): min=674, max=5554, avg=3028.59, stdev=359.46 00:37:27.755 clat percentiles (usec): 00:37:27.755 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:37:27.755 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:37:27.755 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3556], 00:37:27.755 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5342], 00:37:27.755 | 99.99th=[ 5473] 00:37:27.755 bw ( KiB/s): min=20240, max=21424, per=24.83%, avg=20950.40, stdev=320.02, samples=10 00:37:27.755 iops : min= 2530, max= 2678, avg=2618.80, stdev=40.00, samples=10 00:37:27.755 lat (usec) : 750=0.03%, 1000=0.02% 00:37:27.755 lat (msec) : 2=0.73%, 4=97.02%, 10=2.20% 00:37:27.755 cpu : usr=95.92%, sys=3.76%, ctx=6, majf=0, minf=9 00:37:27.755 IO depths : 1=0.6%, 2=11.0%, 4=62.4%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 issued rwts: total=13099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.755 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.755 filename1: (groupid=0, jobs=1): err= 0: pid=416084: Tue Dec 10 06:02:45 2024 00:37:27.755 read: IOPS=2597, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:37:27.755 slat (nsec): min=6101, max=41241, avg=10602.09, stdev=4219.03 00:37:27.755 clat 
(usec): min=579, max=5745, avg=3043.14, stdev=446.24 00:37:27.755 lat (usec): min=587, max=5751, avg=3053.74, stdev=446.09 00:37:27.755 clat percentiles (usec): 00:37:27.755 | 1.00th=[ 2073], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2933], 00:37:27.755 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:37:27.755 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3359], 95.00th=[ 3720], 00:37:27.755 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5538], 99.95th=[ 5604], 00:37:27.755 | 99.99th=[ 5669] 00:37:27.755 bw ( KiB/s): min=20160, max=21152, per=24.66%, avg=20805.33, stdev=365.47, samples=9 00:37:27.755 iops : min= 2520, max= 2644, avg=2600.67, stdev=45.68, samples=9 00:37:27.755 lat (usec) : 750=0.11%, 1000=0.16% 00:37:27.755 lat (msec) : 2=0.63%, 4=95.46%, 10=3.64% 00:37:27.755 cpu : usr=95.88%, sys=3.80%, ctx=7, majf=0, minf=9 00:37:27.755 IO depths : 1=0.7%, 2=10.8%, 4=62.3%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 issued rwts: total=12992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.755 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.755 filename1: (groupid=0, jobs=1): err= 0: pid=416085: Tue Dec 10 06:02:45 2024 00:37:27.755 read: IOPS=2677, BW=20.9MiB/s (21.9MB/s)(105MiB/5001msec) 00:37:27.755 slat (nsec): min=6061, max=42264, avg=10837.44, stdev=4274.78 00:37:27.755 clat (usec): min=605, max=5654, avg=2952.79, stdev=380.47 00:37:27.755 lat (usec): min=617, max=5661, avg=2963.63, stdev=380.60 00:37:27.755 clat percentiles (usec): 00:37:27.755 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2769], 00:37:27.755 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:37:27.755 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3523], 00:37:27.755 | 99.00th=[ 4293], 99.50th=[ 4621], 99.90th=[ 5080], 
99.95th=[ 5407], 00:37:27.755 | 99.99th=[ 5473] 00:37:27.755 bw ( KiB/s): min=20192, max=22268, per=25.38%, avg=21414.00, stdev=625.15, samples=10 00:37:27.755 iops : min= 2524, max= 2783, avg=2676.70, stdev=78.07, samples=10 00:37:27.755 lat (usec) : 750=0.04%, 1000=0.04% 00:37:27.755 lat (msec) : 2=1.28%, 4=96.67%, 10=1.96% 00:37:27.755 cpu : usr=95.54%, sys=4.14%, ctx=11, majf=0, minf=9 00:37:27.755 IO depths : 1=0.5%, 2=10.9%, 4=61.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.755 issued rwts: total=13389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.755 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.755 00:37:27.755 Run status group 0 (all jobs): 00:37:27.755 READ: bw=82.4MiB/s (86.4MB/s), 20.3MiB/s-20.9MiB/s (21.3MB/s-21.9MB/s), io=412MiB (432MB), run=5001-5002msec 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:27.755 
06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.755 00:37:27.755 real 0m24.506s 00:37:27.755 user 4m52.471s 00:37:27.755 sys 0m5.378s 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.755 06:02:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.755 ************************************ 00:37:27.755 END TEST fio_dif_rand_params 00:37:27.755 ************************************ 00:37:27.755 06:02:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:27.755 06:02:45 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:27.755 06:02:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.755 06:02:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:27.755 ************************************ 00:37:27.755 START TEST fio_dif_digest 00:37:27.755 ************************************ 00:37:27.755 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:27.755 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:27.756 bdev_null0 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.756 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.015 [2024-12-10 06:02:45.720152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.015 { 00:37:28.015 "params": { 00:37:28.015 "name": "Nvme$subsystem", 00:37:28.015 "trtype": "$TEST_TRANSPORT", 00:37:28.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.015 "adrfam": "ipv4", 00:37:28.015 "trsvcid": "$NVMF_PORT", 00:37:28.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.015 "hdgst": ${hdgst:-false}, 00:37:28.015 "ddgst": ${ddgst:-false} 00:37:28.015 }, 00:37:28.015 "method": "bdev_nvme_attach_controller" 00:37:28.015 } 00:37:28.015 EOF 00:37:28.015 )") 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:28.015 06:02:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.015 "params": { 00:37:28.015 "name": "Nvme0", 00:37:28.015 "trtype": "tcp", 00:37:28.015 "traddr": "10.0.0.2", 00:37:28.015 "adrfam": "ipv4", 00:37:28.015 "trsvcid": "4420", 00:37:28.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.016 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.016 "hdgst": true, 00:37:28.016 "ddgst": true 00:37:28.016 }, 00:37:28.016 "method": "bdev_nvme_attach_controller" 00:37:28.016 }' 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:28.016 06:02:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.274 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:28.274 ... 
00:37:28.274 fio-3.35 00:37:28.274 Starting 3 threads 00:37:40.482 00:37:40.482 filename0: (groupid=0, jobs=1): err= 0: pid=417230: Tue Dec 10 06:02:56 2024 00:37:40.482 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(364MiB/10007msec) 00:37:40.482 slat (nsec): min=6367, max=28542, avg=11453.50, stdev=1803.06 00:37:40.482 clat (usec): min=6723, max=13330, avg=10287.46, stdev=806.47 00:37:40.482 lat (usec): min=6735, max=13340, avg=10298.92, stdev=806.42 00:37:40.482 clat percentiles (usec): 00:37:40.482 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:37:40.482 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:37:40.482 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:37:40.482 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12911], 99.95th=[13042], 00:37:40.482 | 99.99th=[13304] 00:37:40.482 bw ( KiB/s): min=35584, max=38400, per=35.30%, avg=37273.60, stdev=676.80, samples=20 00:37:40.482 iops : min= 278, max= 300, avg=291.20, stdev= 5.29, samples=20 00:37:40.482 lat (msec) : 10=34.69%, 20=65.31% 00:37:40.482 cpu : usr=94.56%, sys=5.14%, ctx=18, majf=0, minf=86 00:37:40.482 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.482 issued rwts: total=2914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.482 filename0: (groupid=0, jobs=1): err= 0: pid=417231: Tue Dec 10 06:02:56 2024 00:37:40.482 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(329MiB/10045msec) 00:37:40.482 slat (nsec): min=6411, max=31699, avg=11318.91, stdev=1638.40 00:37:40.482 clat (usec): min=6916, max=51946, avg=11407.73, stdev=1844.58 00:37:40.482 lat (usec): min=6928, max=51958, avg=11419.05, stdev=1844.55 00:37:40.482 clat percentiles (usec): 00:37:40.482 | 1.00th=[ 9372], 
5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:37:40.482 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:37:40.482 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:37:40.482 | 99.00th=[13304], 99.50th=[13698], 99.90th=[51119], 99.95th=[51643], 00:37:40.482 | 99.99th=[52167] 00:37:40.482 bw ( KiB/s): min=30976, max=34560, per=31.90%, avg=33689.60, stdev=780.92, samples=20 00:37:40.482 iops : min= 242, max= 270, avg=263.20, stdev= 6.10, samples=20 00:37:40.482 lat (msec) : 10=3.98%, 20=95.83%, 50=0.08%, 100=0.11% 00:37:40.482 cpu : usr=94.16%, sys=5.54%, ctx=15, majf=0, minf=57 00:37:40.482 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.482 issued rwts: total=2635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.482 filename0: (groupid=0, jobs=1): err= 0: pid=417232: Tue Dec 10 06:02:56 2024 00:37:40.482 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(342MiB/10046msec) 00:37:40.482 slat (nsec): min=6327, max=28895, avg=11197.04, stdev=1754.68 00:37:40.482 clat (usec): min=6341, max=50302, avg=10974.76, stdev=1821.02 00:37:40.482 lat (usec): min=6352, max=50331, avg=10985.96, stdev=1821.27 00:37:40.482 clat percentiles (usec): 00:37:40.482 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:37:40.482 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:37:40.482 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:37:40.482 | 99.00th=[12780], 99.50th=[13042], 99.90th=[50070], 99.95th=[50070], 00:37:40.482 | 99.99th=[50070] 00:37:40.482 bw ( KiB/s): min=31488, max=36608, per=33.16%, avg=35020.80, stdev=1049.29, samples=20 00:37:40.482 iops : min= 246, max= 286, avg=273.60, stdev= 8.20, 
samples=20 00:37:40.482 lat (msec) : 10=11.03%, 20=88.79%, 50=0.07%, 100=0.11% 00:37:40.482 cpu : usr=94.90%, sys=4.79%, ctx=16, majf=0, minf=98 00:37:40.482 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.482 issued rwts: total=2739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.482 00:37:40.482 Run status group 0 (all jobs): 00:37:40.483 READ: bw=103MiB/s (108MB/s), 32.8MiB/s-36.4MiB/s (34.4MB/s-38.2MB/s), io=1036MiB (1086MB), run=10007-10046msec 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.483 00:37:40.483 
real 0m11.212s 00:37:40.483 user 0m35.069s 00:37:40.483 sys 0m1.836s 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:40.483 06:02:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.483 ************************************ 00:37:40.483 END TEST fio_dif_digest 00:37:40.483 ************************************ 00:37:40.483 06:02:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:40.483 06:02:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.483 rmmod nvme_tcp 00:37:40.483 rmmod nvme_fabrics 00:37:40.483 rmmod nvme_keyring 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 408842 ']' 00:37:40.483 06:02:56 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 408842 00:37:40.483 06:02:56 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 408842 ']' 00:37:40.483 06:02:56 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 408842 00:37:40.483 06:02:56 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:40.483 06:02:56 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:40.483 06:02:56 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408842 00:37:40.483 06:02:57 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:40.483 06:02:57 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:40.483 06:02:57 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408842' 00:37:40.483 killing process with pid 408842 00:37:40.483 06:02:57 nvmf_dif -- common/autotest_common.sh@973 -- # kill 408842 00:37:40.483 06:02:57 nvmf_dif -- common/autotest_common.sh@978 -- # wait 408842 00:37:40.483 06:02:57 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:40.483 06:02:57 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:42.389 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:42.649 Waiting for block devices as requested 00:37:42.649 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:42.649 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:42.908 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:42.908 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:42.908 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:43.167 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:43.167 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:43.167 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:43.167 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:43.426 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:43.426 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:43.426 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:43.685 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:43.685 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:43.685 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:43.944 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:43.944 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@791 -- # 
iptables-restore 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:43.944 06:03:01 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.944 06:03:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:43.944 06:03:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.479 06:03:03 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:46.479 00:37:46.479 real 1m16.342s 00:37:46.479 user 7m10.735s 00:37:46.479 sys 0m22.302s 00:37:46.479 06:03:03 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.479 06:03:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.479 ************************************ 00:37:46.479 END TEST nvmf_dif 00:37:46.479 ************************************ 00:37:46.479 06:03:03 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:46.479 06:03:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:46.479 06:03:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.479 06:03:03 -- common/autotest_common.sh@10 -- # set +x 00:37:46.479 ************************************ 00:37:46.479 START TEST nvmf_abort_qd_sizes 00:37:46.479 ************************************ 00:37:46.479 06:03:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:46.479 * Looking for test storage... 
00:37:46.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:46.479 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.480 --rc genhtml_branch_coverage=1 00:37:46.480 --rc genhtml_function_coverage=1 00:37:46.480 --rc genhtml_legend=1 00:37:46.480 --rc geninfo_all_blocks=1 00:37:46.480 --rc geninfo_unexecuted_blocks=1 00:37:46.480 00:37:46.480 ' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.480 --rc genhtml_branch_coverage=1 00:37:46.480 --rc genhtml_function_coverage=1 00:37:46.480 --rc genhtml_legend=1 00:37:46.480 --rc 
geninfo_all_blocks=1 00:37:46.480 --rc geninfo_unexecuted_blocks=1 00:37:46.480 00:37:46.480 ' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.480 --rc genhtml_branch_coverage=1 00:37:46.480 --rc genhtml_function_coverage=1 00:37:46.480 --rc genhtml_legend=1 00:37:46.480 --rc geninfo_all_blocks=1 00:37:46.480 --rc geninfo_unexecuted_blocks=1 00:37:46.480 00:37:46.480 ' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.480 --rc genhtml_branch_coverage=1 00:37:46.480 --rc genhtml_function_coverage=1 00:37:46.480 --rc genhtml_legend=1 00:37:46.480 --rc geninfo_all_blocks=1 00:37:46.480 --rc geninfo_unexecuted_blocks=1 00:37:46.480 00:37:46.480 ' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:46.480 06:03:04 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:46.480 06:03:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:46.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:46.480 06:03:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:53.044 06:03:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:53.044 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:53.044 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.044 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:53.045 Found net devices under 0000:af:00.0: cvl_0_0 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:53.045 Found net devices under 0000:af:00.1: cvl_0_1 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:53.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:53.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:37:53.045 00:37:53.045 --- 10.0.0.2 ping statistics --- 00:37:53.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.045 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:53.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:53.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:37:53.045 00:37:53.045 --- 10.0.0.1 ping statistics --- 00:37:53.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.045 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:53.045 06:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:56.441 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:56.441 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:56.441 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:57.377 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:57.377 06:03:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=426357 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 426357 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 426357 ']' 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:57.377 06:03:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:57.377 [2024-12-10 06:03:15.215534] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:37:57.377 [2024-12-10 06:03:15.215576] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:57.377 [2024-12-10 06:03:15.301322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:57.633 [2024-12-10 06:03:15.343835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.633 [2024-12-10 06:03:15.343872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.633 [2024-12-10 06:03:15.343881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.633 [2024-12-10 06:03:15.343887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:57.633 [2024-12-10 06:03:15.343892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:57.633 [2024-12-10 06:03:15.345374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.633 [2024-12-10 06:03:15.345484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:57.633 [2024-12-10 06:03:15.345512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.633 [2024-12-10 06:03:15.345513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.196 06:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:58.196 ************************************ 00:37:58.196 START TEST spdk_target_abort 00:37:58.196 ************************************ 00:37:58.196 06:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:58.196 06:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:58.196 06:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:37:58.196 06:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 
-- # xtrace_disable 00:37:58.196 06:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.468 spdk_targetn1 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.468 [2024-12-10 06:03:18.972664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.468 06:03:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.468 [2024-12-10 06:03:19.020975] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:01.468 06:03:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:04.739 Initializing NVMe Controllers 00:38:04.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:04.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:04.740 Initialization complete. Launching workers. 
00:38:04.740 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15085, failed: 0 00:38:04.740 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1358, failed to submit 13727 00:38:04.740 success 698, unsuccessful 660, failed 0 00:38:04.740 06:03:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:04.740 06:03:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:08.019 Initializing NVMe Controllers 00:38:08.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:08.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:08.019 Initialization complete. Launching workers. 00:38:08.019 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8772, failed: 0 00:38:08.019 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7537 00:38:08.019 success 363, unsuccessful 872, failed 0 00:38:08.019 06:03:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:08.019 06:03:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:11.299 Initializing NVMe Controllers 00:38:11.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:11.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:11.299 Initialization complete. Launching workers. 
00:38:11.299 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38124, failed: 0 00:38:11.299 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2780, failed to submit 35344 00:38:11.299 success 571, unsuccessful 2209, failed 0 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.299 06:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 426357 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 426357 ']' 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 426357 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426357 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426357' 00:38:12.233 killing process with pid 426357 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 426357 00:38:12.233 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 426357 00:38:12.492 00:38:12.492 real 0m14.095s 00:38:12.492 user 0m56.201s 00:38:12.492 sys 0m2.579s 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.492 ************************************ 00:38:12.492 END TEST spdk_target_abort 00:38:12.492 ************************************ 00:38:12.492 06:03:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:12.492 06:03:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:12.492 06:03:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:12.492 06:03:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:12.492 ************************************ 00:38:12.492 START TEST kernel_target_abort 00:38:12.492 ************************************ 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:12.492 06:03:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:12.492 06:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:15.782 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:38:15.782 Waiting for block devices as requested 00:38:15.782 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:15.782 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:16.041 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:16.041 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:16.041 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:16.300 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:16.300 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:16.300 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:16.559 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:16.559 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:16.559 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:16.559 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:16.818 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:16.818 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:16.818 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:17.076 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:17.076 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:17.076 06:03:34 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:17.076 06:03:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:17.076 No valid GPT data, bailing 00:38:17.076 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:17.076 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:17.341 06:03:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:38:17.341 No valid GPT data, bailing 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # continue 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:17.341 06:03:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:17.341 00:38:17.341 Discovery Log Number of Records 2, Generation counter 2 00:38:17.341 =====Discovery Log Entry 0====== 00:38:17.341 trtype: tcp 00:38:17.341 adrfam: ipv4 00:38:17.341 subtype: current discovery subsystem 00:38:17.341 treq: not specified, sq flow control disable supported 00:38:17.341 portid: 1 00:38:17.341 trsvcid: 4420 00:38:17.341 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:17.341 traddr: 10.0.0.1 
00:38:17.341 eflags: none 00:38:17.341 sectype: none 00:38:17.341 =====Discovery Log Entry 1====== 00:38:17.341 trtype: tcp 00:38:17.341 adrfam: ipv4 00:38:17.341 subtype: nvme subsystem 00:38:17.341 treq: not specified, sq flow control disable supported 00:38:17.341 portid: 1 00:38:17.341 trsvcid: 4420 00:38:17.341 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:17.341 traddr: 10.0.0.1 00:38:17.341 eflags: none 00:38:17.341 sectype: none 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort 
-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:17.341 06:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:20.624 Initializing NVMe Controllers 00:38:20.624 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:20.624 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:20.624 Initialization complete. Launching workers. 
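Before the abort runs print their tallies, the `configure_kernel_target` sequence traced above (nvmf/common.sh@686-705) is worth restating in one place. The trace shows the `mkdir`/`echo`/`ln -s` commands but not the `echo` redirect targets, so the configfs attribute file names below are inferred from the standard kernel `nvmet` layout; the backing device is the free, non-zoned namespace the GPT scan settled on in this run (`/dev/nvme1n1`). This sketch only prints the plan; actually applying it needs root and `modprobe nvmet nvmet-tcp`.

```shell
#!/usr/bin/env bash
# Dry-run sketch of configure_kernel_target: create an nvmet subsystem,
# back namespace 1 with a local NVMe device, open a TCP port on
# 10.0.0.1:4420, and link the subsystem to the port.
nqn=nqn.2016-06.io.spdk:testnqn
dev=/dev/nvme1n1   # device chosen by the GPT/zoned scan in this run
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

steps=(
  "mkdir -p $subsys/namespaces/1 $port"
  "echo 1 > $subsys/attr_allow_any_host"          # attr name inferred, not in the trace
  "echo $dev > $subsys/namespaces/1/device_path"
  "echo 1 > $subsys/namespaces/1/enable"
  "echo 10.0.0.1 > $port/addr_traddr"
  "echo tcp > $port/addr_trtype"
  "echo 4420 > $port/addr_trsvcid"
  "echo ipv4 > $port/addr_adrfam"
  "ln -s $subsys $port/subsystems/"
)
printf '%s\n' "${steps[@]}"   # review, then pipe to `sudo bash` on a real target
```

Once the symlink lands, the port starts serving the subsystem, which is why the very next thing the trace does is `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` and sees two log entries.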
00:38:20.624 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76440, failed: 0 00:38:20.624 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 76440, failed to submit 0 00:38:20.624 success 0, unsuccessful 76440, failed 0 00:38:20.624 06:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:20.624 06:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.907 Initializing NVMe Controllers 00:38:23.907 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:23.907 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:23.907 Initialization complete. Launching workers. 00:38:23.907 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139160, failed: 0 00:38:23.907 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32366, failed to submit 106794 00:38:23.907 success 0, unsuccessful 32366, failed 0 00:38:23.907 06:03:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:23.907 06:03:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.193 Initializing NVMe Controllers 00:38:27.193 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:27.193 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:27.193 Initialization complete. Launching workers. 
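The abort example's summary lines are internally consistent across all of these runs: every completed I/O gets exactly one abort attempt (so `abort submitted` plus `failed to submit` equals `I/O completed`), and every submitted abort lands in exactly one outcome bucket (`success` plus `unsuccessful` plus `failed` equals `submitted`). A quick arithmetic check using the qd=24 kernel-target numbers from the trace above:

```shell
# Accounting identities for the abort example's summary, qd=24 run:
#   NS ... I/O completed: 139160, failed: 0
#   CTRLR ... abort submitted 32366, failed to submit 106794
#   success 0, unsuccessful 32366, failed 0
completed=139160
submitted=32366
failed_to_submit=106794
success=0
unsuccessful=32366
failed=0

# One abort attempt per completed I/O:
[ $((submitted + failed_to_submit)) -eq "$completed" ] && echo "abort-submit accounting OK"
# One outcome per submitted abort:
[ $((success + unsuccessful + failed)) -eq "$submitted" ] && echo "abort-outcome accounting OK"
```

The same identities hold for the qd=4 run (76440 + 0 = 76440) and the qd=64 run (32686 + 97998 = 130684), and for the earlier spdk_target run (2780 + 35344 = 38124).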
00:38:27.193 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130684, failed: 0 00:38:27.193 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32686, failed to submit 97998 00:38:27.193 success 0, unsuccessful 32686, failed 0 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:27.193 06:03:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:29.728 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:38:29.987 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:29.987 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:29.987 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:29.987 0000:00:04.4 (8086 2021): ioatdma -> 
vfio-pci 00:38:29.987 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:29.987 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:29.987 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:30.246 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:30.246 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:30.246 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:30.246 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:30.247 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:30.247 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:30.247 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:30.247 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:30.247 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:31.184 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:31.184 00:38:31.184 real 0m18.638s 00:38:31.184 user 0m9.111s 00:38:31.184 sys 0m5.891s 00:38:31.184 06:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.184 06:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.184 ************************************ 00:38:31.184 END TEST kernel_target_abort 00:38:31.184 ************************************ 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:31.184 06:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:31.184 rmmod nvme_tcp 00:38:31.184 rmmod nvme_fabrics 00:38:31.184 
rmmod nvme_keyring 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 426357 ']' 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 426357 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 426357 ']' 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 426357 00:38:31.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (426357) - No such process 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 426357 is not found' 00:38:31.184 Process with pid 426357 is not found 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:31.184 06:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:34.474 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:38:34.474 Waiting for block devices as requested 00:38:34.474 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:34.474 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:34.733 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:34.733 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:34.733 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:34.993 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:34.993 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:34.993 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:34.993 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:35.252 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:35.252 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:35.252 0000:80:04.5 (8086 2021): 
vfio-pci -> ioatdma 00:38:35.511 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:35.511 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:35.511 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:35.511 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:35.770 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:35.770 06:03:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.305 06:03:55 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.305 00:38:38.305 real 0m51.718s 00:38:38.305 user 1m10.521s 00:38:38.305 sys 0m18.332s 00:38:38.305 06:03:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.305 06:03:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:38.305 ************************************ 00:38:38.305 END TEST nvmf_abort_qd_sizes 00:38:38.305 ************************************ 00:38:38.305 06:03:55 -- spdk/autotest.sh@292 -- # run_test keyring_file 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:38.305 06:03:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:38.305 06:03:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.305 06:03:55 -- common/autotest_common.sh@10 -- # set +x 00:38:38.305 ************************************ 00:38:38.305 START TEST keyring_file 00:38:38.305 ************************************ 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:38.305 * Looking for test storage... 00:38:38.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@344 
-- # case "$op" in 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.305 06:03:55 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:38.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.305 --rc genhtml_branch_coverage=1 00:38:38.305 --rc genhtml_function_coverage=1 00:38:38.305 --rc genhtml_legend=1 00:38:38.305 --rc geninfo_all_blocks=1 00:38:38.305 --rc geninfo_unexecuted_blocks=1 00:38:38.305 00:38:38.305 ' 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:38.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:38.305 --rc genhtml_branch_coverage=1 00:38:38.305 --rc genhtml_function_coverage=1 00:38:38.305 --rc genhtml_legend=1 00:38:38.305 --rc geninfo_all_blocks=1 00:38:38.305 --rc geninfo_unexecuted_blocks=1 00:38:38.305 00:38:38.305 ' 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:38.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.305 --rc genhtml_branch_coverage=1 00:38:38.305 --rc genhtml_function_coverage=1 00:38:38.305 --rc genhtml_legend=1 00:38:38.305 --rc geninfo_all_blocks=1 00:38:38.305 --rc geninfo_unexecuted_blocks=1 00:38:38.305 00:38:38.305 ' 00:38:38.305 06:03:55 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:38.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.305 --rc genhtml_branch_coverage=1 00:38:38.305 --rc genhtml_function_coverage=1 00:38:38.305 --rc genhtml_legend=1 00:38:38.305 --rc geninfo_all_blocks=1 00:38:38.305 --rc geninfo_unexecuted_blocks=1 00:38:38.305 00:38:38.305 ' 00:38:38.305 06:03:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:38.305 06:03:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.305 06:03:55 
keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.305 06:03:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.306 06:03:55 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:38.306 06:03:55 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.306 06:03:55 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.306 06:03:55 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.306 06:03:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.306 06:03:55 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.306 06:03:55 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.306 06:03:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:38.306 06:03:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:38.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:38.306 06:03:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:38.306 06:03:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:38.306 06:03:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:38.306 06:03:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:38.306 06:03:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:38.306 06:03:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HVIdZKdvHM 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:38.306 06:03:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:38.306 06:03:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HVIdZKdvHM 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HVIdZKdvHM 00:38:38.306 06:03:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HVIdZKdvHM 00:38:38.306 06:03:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NENw5QNZPf 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:38.306 06:03:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:38.306 06:03:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:38.306 06:03:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:38.306 06:03:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:38.306 06:03:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:38.306 06:03:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NENw5QNZPf 00:38:38.306 06:03:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NENw5QNZPf 00:38:38.306 06:03:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NENw5QNZPf 
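The two `format_interchange_psk` calls above feed the hex key through an inline `python -` snippet before writing it to the temp file. A standalone sketch of what that formatting plausibly computes, assuming the NVMe TLS PSK interchange layout `NVMeTLSkey-1:<hash-id>:<base64(key + CRC32)>:` (the little-endian byte order of the appended CRC is an assumption here, not something visible in the trace):

```python
import base64
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Format a raw hex PSK into an NVMe TLS interchange-style string.

    Assumed layout: 'NVMeTLSkey-1:<hash-id>:<base64(key + crc32)>:' with
    the CRC32 of the key bytes appended little-endian (an assumption).
    digest=0 (no hash) matches the trace above.
    """
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{payload}:"

# The same key0 value used by the test
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The result is what ends up in `/tmp/tmp.*` before `chmod 0600` locks its permissions down.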
00:38:38.306 06:03:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=435631 00:38:38.306 06:03:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:38.306 06:03:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 435631 00:38:38.306 06:03:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 435631 ']' 00:38:38.306 06:03:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:38.306 06:03:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:38.306 06:03:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:38.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:38.306 06:03:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:38.306 06:03:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:38.306 [2024-12-10 06:03:56.107837] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:38:38.306 [2024-12-10 06:03:56.107888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435631 ] 00:38:38.306 [2024-12-10 06:03:56.188731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.306 [2024-12-10 06:03:56.229158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:38.565 06:03:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:38.565 [2024-12-10 06:03:56.452290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.565 null0 00:38:38.565 [2024-12-10 06:03:56.484339] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:38.565 [2024-12-10 06:03:56.484634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.565 06:03:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:38.565 [2024-12-10 06:03:56.512402] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:38.565 request: 00:38:38.565 { 00:38:38.565 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:38.565 "secure_channel": false, 00:38:38.565 "listen_address": { 00:38:38.565 "trtype": "tcp", 00:38:38.565 "traddr": "127.0.0.1", 00:38:38.565 "trsvcid": "4420" 00:38:38.565 }, 00:38:38.565 "method": "nvmf_subsystem_add_listener", 00:38:38.565 "req_id": 1 00:38:38.565 } 00:38:38.565 Got JSON-RPC error response 00:38:38.565 response: 00:38:38.565 { 00:38:38.565 "code": -32602, 00:38:38.565 "message": "Invalid parameters" 00:38:38.565 } 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:38.565 06:03:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:38.824 06:03:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=435817 00:38:38.824 06:03:56 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:38.824 06:03:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 435817 /var/tmp/bperf.sock 00:38:38.824 06:03:56 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 435817 ']' 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:38.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:38.824 [2024-12-10 06:03:56.566097] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 00:38:38.824 [2024-12-10 06:03:56.566140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435817 ] 00:38:38.824 [2024-12-10 06:03:56.645023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.824 [2024-12-10 06:03:56.685962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:38.824 06:03:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:38.824 06:03:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:38.824 06:03:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:39.083 06:03:56 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NENw5QNZPf 00:38:39.083 06:03:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NENw5QNZPf 00:38:39.342 06:03:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:39.342 06:03:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:39.342 06:03:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.342 06:03:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:39.342 06:03:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.601 06:03:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.HVIdZKdvHM == \/\t\m\p\/\t\m\p\.\H\V\I\d\Z\K\d\v\H\M ]] 00:38:39.601 06:03:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:39.601 06:03:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.601 06:03:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.NENw5QNZPf == \/\t\m\p\/\t\m\p\.\N\E\N\w\5\Q\N\Z\P\f ]] 00:38:39.601 06:03:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.601 06:03:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
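The refcount checks above repeatedly pair `keyring_get_keys` with `jq '.[] | select(.name == "key0")'` and then project `.path` or `.refcnt`. The same filter expressed in Python, run against a hypothetical sample of the RPC's output (only the three fields the trace's jq filters touch are modeled; the sample values mirror the paths seen above but are otherwise made up):

```python
import json

# Hypothetical keyring_get_keys reply; real output may carry more fields.
sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.HVIdZKdvHM", "refcnt": 1},
  {"name": "key1", "path": "/tmp/tmp.NENw5QNZPf", "refcnt": 1}
]
""")

def get_key(keys, name):
    """Equivalent of jq '.[] | select(.name == <name>)' on the key list."""
    return next((k for k in keys if k["name"] == name), None)

key0 = get_key(sample, "key0")
print(key0["path"], key0["refcnt"])  # the .path / .refcnt projections
```

A key that is attached to a live controller shows `refcnt == 2` later in the trace; after detach it drops back to 1.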
00:38:39.860 06:03:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:39.860 06:03:57 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:39.860 06:03:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:39.860 06:03:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.860 06:03:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.860 06:03:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:39.860 06:03:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.119 06:03:57 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:40.119 06:03:57 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:40.119 06:03:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:40.378 [2024-12-10 06:03:58.128034] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:40.378 nvme0n1 00:38:40.378 06:03:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:40.378 06:03:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.378 06:03:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.378 06:03:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.378 06:03:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.378 06:03:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:40.637 06:03:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:40.637 06:03:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:40.637 06:03:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:40.637 06:03:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.637 06:03:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.637 06:03:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:40.637 06:03:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.895 06:03:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:40.895 06:03:58 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:40.895 Running I/O for 1 seconds... 00:38:41.830 19235.00 IOPS, 75.14 MiB/s 00:38:41.830 Latency(us) 00:38:41.830 [2024-12-10T05:03:59.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.830 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:41.830 nvme0n1 : 1.00 19281.36 75.32 0.00 0.00 6626.67 2652.65 17601.10 00:38:41.830 [2024-12-10T05:03:59.789Z] =================================================================================================================== 00:38:41.830 [2024-12-10T05:03:59.789Z] Total : 19281.36 75.32 0.00 0.00 6626.67 2652.65 17601.10 00:38:41.830 { 00:38:41.830 "results": [ 00:38:41.830 { 00:38:41.830 "job": "nvme0n1", 00:38:41.830 "core_mask": "0x2", 00:38:41.830 "workload": "randrw", 00:38:41.830 "percentage": 50, 00:38:41.830 "status": "finished", 00:38:41.830 "queue_depth": 128, 00:38:41.830 "io_size": 4096, 00:38:41.830 "runtime": 1.004286, 00:38:41.830 "iops": 19281.36009065147, 00:38:41.830 "mibps": 75.3178128541073, 
00:38:41.830 "io_failed": 0, 00:38:41.830 "io_timeout": 0, 00:38:41.830 "avg_latency_us": 6626.672549748675, 00:38:41.830 "min_latency_us": 2652.647619047619, 00:38:41.830 "max_latency_us": 17601.097142857143 00:38:41.830 } 00:38:41.830 ], 00:38:41.830 "core_count": 1 00:38:41.830 } 00:38:41.830 06:03:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:41.830 06:03:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:42.089 06:03:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:42.089 06:03:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.089 06:03:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.089 06:03:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.089 06:03:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.089 06:03:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.347 06:04:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:42.347 06:04:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:42.347 06:04:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:42.347 06:04:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.347 06:04:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.347 06:04:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:42.348 06:04:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.606 06:04:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:42.606 06:04:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:42.606 06:04:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:42.606 06:04:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:42.607 06:04:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:42.607 06:04:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:42.607 06:04:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:42.607 06:04:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:42.607 06:04:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:42.607 06:04:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:42.607 [2024-12-10 06:04:00.546720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:42.607 [2024-12-10 06:04:00.546856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e07d0 (107): Transport endpoint is not connected 00:38:42.607 [2024-12-10 06:04:00.547851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e07d0 (9): Bad file descriptor 00:38:42.607 [2024-12-10 06:04:00.548852] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:42.607 [2024-12-10 06:04:00.548863] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:42.607 [2024-12-10 06:04:00.548870] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:42.607 [2024-12-10 06:04:00.548878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:42.607 request: 00:38:42.607 { 00:38:42.607 "name": "nvme0", 00:38:42.607 "trtype": "tcp", 00:38:42.607 "traddr": "127.0.0.1", 00:38:42.607 "adrfam": "ipv4", 00:38:42.607 "trsvcid": "4420", 00:38:42.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:42.607 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:42.607 "prchk_reftag": false, 00:38:42.607 "prchk_guard": false, 00:38:42.607 "hdgst": false, 00:38:42.607 "ddgst": false, 00:38:42.607 "psk": "key1", 00:38:42.607 "allow_unrecognized_csi": false, 00:38:42.607 "method": "bdev_nvme_attach_controller", 00:38:42.607 "req_id": 1 00:38:42.607 } 00:38:42.607 Got JSON-RPC error response 00:38:42.607 response: 00:38:42.607 { 00:38:42.607 "code": -5, 00:38:42.607 "message": "Input/output error" 00:38:42.607 } 00:38:42.866 06:04:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:42.866 06:04:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:42.866 06:04:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:42.866 06:04:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:42.866 06:04:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.866 06:04:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:42.866 06:04:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:42.866 06:04:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.125 06:04:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:43.125 06:04:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:43.125 06:04:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:43.384 06:04:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:43.384 06:04:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:43.642 06:04:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:43.642 06:04:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:43.642 06:04:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.902 06:04:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:38:43.902 06:04:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.HVIdZKdvHM 00:38:43.902 06:04:01 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:43.902 06:04:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:43.902 [2024-12-10 06:04:01.795223] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HVIdZKdvHM': 0100660 00:38:43.902 [2024-12-10 06:04:01.795248] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:43.902 request: 00:38:43.902 { 00:38:43.902 "name": "key0", 00:38:43.902 "path": "/tmp/tmp.HVIdZKdvHM", 00:38:43.902 "method": "keyring_file_add_key", 00:38:43.902 "req_id": 1 00:38:43.902 } 00:38:43.902 Got JSON-RPC error response 00:38:43.902 response: 00:38:43.902 { 00:38:43.902 "code": -1, 00:38:43.902 "message": "Operation not permitted" 00:38:43.902 } 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:43.902 06:04:01 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:43.902 06:04:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:43.902 06:04:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.HVIdZKdvHM 00:38:43.902 06:04:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:43.902 06:04:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HVIdZKdvHM 00:38:44.161 06:04:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.HVIdZKdvHM 00:38:44.161 06:04:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:44.161 06:04:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:44.161 06:04:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:44.161 06:04:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.161 06:04:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:44.161 06:04:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.419 06:04:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:44.419 06:04:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.419 06:04:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:44.419 06:04:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.419 06:04:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:44.419 06:04:02 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:44.419 06:04:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:44.419 06:04:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:44.419 06:04:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.419 06:04:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.419 [2024-12-10 06:04:02.372756] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HVIdZKdvHM': No such file or directory 00:38:44.419 [2024-12-10 06:04:02.372785] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:44.419 [2024-12-10 06:04:02.372801] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:44.419 [2024-12-10 06:04:02.372808] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:44.419 [2024-12-10 06:04:02.372815] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:44.419 [2024-12-10 06:04:02.372821] bdev_nvme.c:6795:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:44.678 request: 00:38:44.678 { 00:38:44.678 "name": "nvme0", 00:38:44.678 "trtype": "tcp", 00:38:44.678 "traddr": "127.0.0.1", 00:38:44.678 "adrfam": "ipv4", 00:38:44.678 "trsvcid": "4420", 00:38:44.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.678 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:44.678 "prchk_reftag": false, 00:38:44.678 "prchk_guard": false, 00:38:44.678 "hdgst": false, 00:38:44.678 "ddgst": false, 00:38:44.678 "psk": "key0", 00:38:44.678 "allow_unrecognized_csi": false, 00:38:44.678 "method": "bdev_nvme_attach_controller", 00:38:44.678 "req_id": 1 00:38:44.678 } 00:38:44.678 Got JSON-RPC error response 00:38:44.678 response: 00:38:44.678 { 00:38:44.678 "code": -19, 00:38:44.678 "message": "No such device" 00:38:44.678 } 00:38:44.678 06:04:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:44.678 06:04:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:44.678 06:04:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:44.678 06:04:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:44.678 06:04:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:44.678 06:04:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YjCuI4O4aJ 00:38:44.678 06:04:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:44.678 06:04:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:44.678 06:04:02 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:44.678 06:04:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:44.678 06:04:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:44.678 06:04:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:44.678 06:04:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:44.937 06:04:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YjCuI4O4aJ 00:38:44.937 06:04:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YjCuI4O4aJ 00:38:44.937 06:04:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.YjCuI4O4aJ 00:38:44.937 06:04:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YjCuI4O4aJ 00:38:44.937 06:04:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YjCuI4O4aJ 00:38:44.937 06:04:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.937 06:04:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:45.195 nvme0n1 00:38:45.195 06:04:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:45.195 06:04:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:45.195 06:04:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.195 06:04:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.196 06:04:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.196 
06:04:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.454 06:04:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:45.454 06:04:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:45.454 06:04:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:45.712 06:04:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:45.712 06:04:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:45.712 06:04:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.712 06:04:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.712 06:04:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.971 06:04:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:45.971 06:04:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:45.971 06:04:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:45.971 06:04:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.971 06:04:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.971 06:04:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.971 06:04:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.971 06:04:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:45.971 06:04:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:45.971 06:04:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:38:46.229 06:04:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:46.229 06:04:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:46.229 06:04:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.488 06:04:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:46.488 06:04:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YjCuI4O4aJ 00:38:46.488 06:04:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YjCuI4O4aJ 00:38:46.746 06:04:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NENw5QNZPf 00:38:46.746 06:04:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NENw5QNZPf 00:38:46.746 06:04:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:46.746 06:04:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:47.005 nvme0n1 00:38:47.005 06:04:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:47.005 06:04:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:47.271 06:04:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:47.271 "subsystems": [ 00:38:47.271 { 00:38:47.271 "subsystem": "keyring", 00:38:47.271 
"config": [ 00:38:47.271 { 00:38:47.271 "method": "keyring_file_add_key", 00:38:47.271 "params": { 00:38:47.271 "name": "key0", 00:38:47.271 "path": "/tmp/tmp.YjCuI4O4aJ" 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "keyring_file_add_key", 00:38:47.271 "params": { 00:38:47.271 "name": "key1", 00:38:47.271 "path": "/tmp/tmp.NENw5QNZPf" 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "iobuf", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "iobuf_set_options", 00:38:47.271 "params": { 00:38:47.271 "small_pool_count": 8192, 00:38:47.271 "large_pool_count": 1024, 00:38:47.271 "small_bufsize": 8192, 00:38:47.271 "large_bufsize": 135168, 00:38:47.271 "enable_numa": false 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "sock", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "sock_set_default_impl", 00:38:47.271 "params": { 00:38:47.271 "impl_name": "posix" 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "sock_impl_set_options", 00:38:47.271 "params": { 00:38:47.271 "impl_name": "ssl", 00:38:47.271 "recv_buf_size": 4096, 00:38:47.271 "send_buf_size": 4096, 00:38:47.271 "enable_recv_pipe": true, 00:38:47.271 "enable_quickack": false, 00:38:47.271 "enable_placement_id": 0, 00:38:47.271 "enable_zerocopy_send_server": true, 00:38:47.271 "enable_zerocopy_send_client": false, 00:38:47.271 "zerocopy_threshold": 0, 00:38:47.271 "tls_version": 0, 00:38:47.271 "enable_ktls": false 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "sock_impl_set_options", 00:38:47.271 "params": { 00:38:47.271 "impl_name": "posix", 00:38:47.271 "recv_buf_size": 2097152, 00:38:47.271 "send_buf_size": 2097152, 00:38:47.271 "enable_recv_pipe": true, 00:38:47.271 "enable_quickack": false, 00:38:47.271 "enable_placement_id": 0, 00:38:47.271 "enable_zerocopy_send_server": true, 00:38:47.271 
"enable_zerocopy_send_client": false, 00:38:47.271 "zerocopy_threshold": 0, 00:38:47.271 "tls_version": 0, 00:38:47.271 "enable_ktls": false 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "vmd", 00:38:47.271 "config": [] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "accel", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "accel_set_options", 00:38:47.271 "params": { 00:38:47.271 "small_cache_size": 128, 00:38:47.271 "large_cache_size": 16, 00:38:47.271 "task_count": 2048, 00:38:47.271 "sequence_count": 2048, 00:38:47.271 "buf_count": 2048 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "bdev", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "bdev_set_options", 00:38:47.271 "params": { 00:38:47.271 "bdev_io_pool_size": 65535, 00:38:47.271 "bdev_io_cache_size": 256, 00:38:47.271 "bdev_auto_examine": true, 00:38:47.271 "iobuf_small_cache_size": 128, 00:38:47.271 "iobuf_large_cache_size": 16 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_raid_set_options", 00:38:47.271 "params": { 00:38:47.271 "process_window_size_kb": 1024, 00:38:47.271 "process_max_bandwidth_mb_sec": 0 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "bdev_iscsi_set_options", 00:38:47.272 "params": { 00:38:47.272 "timeout_sec": 30 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "bdev_nvme_set_options", 00:38:47.272 "params": { 00:38:47.272 "action_on_timeout": "none", 00:38:47.272 "timeout_us": 0, 00:38:47.272 "timeout_admin_us": 0, 00:38:47.272 "keep_alive_timeout_ms": 10000, 00:38:47.272 "arbitration_burst": 0, 00:38:47.272 "low_priority_weight": 0, 00:38:47.272 "medium_priority_weight": 0, 00:38:47.272 "high_priority_weight": 0, 00:38:47.272 "nvme_adminq_poll_period_us": 10000, 00:38:47.272 "nvme_ioq_poll_period_us": 0, 00:38:47.272 "io_queue_requests": 512, 00:38:47.272 
"delay_cmd_submit": true, 00:38:47.272 "transport_retry_count": 4, 00:38:47.272 "bdev_retry_count": 3, 00:38:47.272 "transport_ack_timeout": 0, 00:38:47.272 "ctrlr_loss_timeout_sec": 0, 00:38:47.272 "reconnect_delay_sec": 0, 00:38:47.272 "fast_io_fail_timeout_sec": 0, 00:38:47.272 "disable_auto_failback": false, 00:38:47.272 "generate_uuids": false, 00:38:47.272 "transport_tos": 0, 00:38:47.272 "nvme_error_stat": false, 00:38:47.272 "rdma_srq_size": 0, 00:38:47.272 "io_path_stat": false, 00:38:47.272 "allow_accel_sequence": false, 00:38:47.272 "rdma_max_cq_size": 0, 00:38:47.272 "rdma_cm_event_timeout_ms": 0, 00:38:47.272 "dhchap_digests": [ 00:38:47.272 "sha256", 00:38:47.272 "sha384", 00:38:47.272 "sha512" 00:38:47.272 ], 00:38:47.272 "dhchap_dhgroups": [ 00:38:47.272 "null", 00:38:47.272 "ffdhe2048", 00:38:47.272 "ffdhe3072", 00:38:47.272 "ffdhe4096", 00:38:47.272 "ffdhe6144", 00:38:47.272 "ffdhe8192" 00:38:47.272 ], 00:38:47.272 "rdma_umr_per_io": false 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "bdev_nvme_attach_controller", 00:38:47.272 "params": { 00:38:47.272 "name": "nvme0", 00:38:47.272 "trtype": "TCP", 00:38:47.272 "adrfam": "IPv4", 00:38:47.272 "traddr": "127.0.0.1", 00:38:47.272 "trsvcid": "4420", 00:38:47.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:47.272 "prchk_reftag": false, 00:38:47.272 "prchk_guard": false, 00:38:47.272 "ctrlr_loss_timeout_sec": 0, 00:38:47.272 "reconnect_delay_sec": 0, 00:38:47.272 "fast_io_fail_timeout_sec": 0, 00:38:47.272 "psk": "key0", 00:38:47.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:47.272 "hdgst": false, 00:38:47.272 "ddgst": false, 00:38:47.272 "multipath": "multipath" 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "bdev_nvme_set_hotplug", 00:38:47.272 "params": { 00:38:47.272 "period_us": 100000, 00:38:47.272 "enable": false 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "bdev_wait_for_examine" 00:38:47.272 } 00:38:47.272 ] 00:38:47.272 
}, 00:38:47.272 { 00:38:47.272 "subsystem": "nbd", 00:38:47.272 "config": [] 00:38:47.272 } 00:38:47.272 ] 00:38:47.272 }' 00:38:47.272 06:04:05 keyring_file -- keyring/file.sh@115 -- # killprocess 435817 00:38:47.272 06:04:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 435817 ']' 00:38:47.272 06:04:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 435817 00:38:47.272 06:04:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:47.272 06:04:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:47.272 06:04:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435817 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435817' 00:38:47.531 killing process with pid 435817 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@973 -- # kill 435817 00:38:47.531 Received shutdown signal, test time was about 1.000000 seconds 00:38:47.531 00:38:47.531 Latency(us) 00:38:47.531 [2024-12-10T05:04:05.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.531 [2024-12-10T05:04:05.490Z] =================================================================================================================== 00:38:47.531 [2024-12-10T05:04:05.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@978 -- # wait 435817 00:38:47.531 06:04:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=437356 00:38:47.531 06:04:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 437356 /var/tmp/bperf.sock 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 437356 ']' 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/bperf.sock 00:38:47.531 06:04:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.531 06:04:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:47.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:47.531 06:04:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:47.531 "subsystems": [ 00:38:47.531 { 00:38:47.531 "subsystem": "keyring", 00:38:47.531 "config": [ 00:38:47.531 { 00:38:47.531 "method": "keyring_file_add_key", 00:38:47.531 "params": { 00:38:47.531 "name": "key0", 00:38:47.531 "path": "/tmp/tmp.YjCuI4O4aJ" 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "keyring_file_add_key", 00:38:47.531 "params": { 00:38:47.531 "name": "key1", 00:38:47.531 "path": "/tmp/tmp.NENw5QNZPf" 00:38:47.531 } 00:38:47.531 } 00:38:47.531 ] 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "subsystem": "iobuf", 00:38:47.531 "config": [ 00:38:47.531 { 00:38:47.531 "method": "iobuf_set_options", 00:38:47.531 "params": { 00:38:47.531 "small_pool_count": 8192, 00:38:47.531 "large_pool_count": 1024, 00:38:47.531 "small_bufsize": 8192, 00:38:47.531 "large_bufsize": 135168, 00:38:47.531 "enable_numa": false 00:38:47.531 } 00:38:47.531 } 00:38:47.531 ] 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "subsystem": "sock", 00:38:47.531 "config": [ 00:38:47.531 { 00:38:47.531 "method": "sock_set_default_impl", 00:38:47.531 "params": { 00:38:47.531 "impl_name": "posix" 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "sock_impl_set_options", 00:38:47.531 "params": { 00:38:47.531 "impl_name": "ssl", 00:38:47.531 "recv_buf_size": 4096, 
00:38:47.531 "send_buf_size": 4096, 00:38:47.531 "enable_recv_pipe": true, 00:38:47.531 "enable_quickack": false, 00:38:47.531 "enable_placement_id": 0, 00:38:47.531 "enable_zerocopy_send_server": true, 00:38:47.531 "enable_zerocopy_send_client": false, 00:38:47.531 "zerocopy_threshold": 0, 00:38:47.531 "tls_version": 0, 00:38:47.531 "enable_ktls": false 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "sock_impl_set_options", 00:38:47.531 "params": { 00:38:47.531 "impl_name": "posix", 00:38:47.531 "recv_buf_size": 2097152, 00:38:47.531 "send_buf_size": 2097152, 00:38:47.531 "enable_recv_pipe": true, 00:38:47.531 "enable_quickack": false, 00:38:47.531 "enable_placement_id": 0, 00:38:47.531 "enable_zerocopy_send_server": true, 00:38:47.531 "enable_zerocopy_send_client": false, 00:38:47.531 "zerocopy_threshold": 0, 00:38:47.531 "tls_version": 0, 00:38:47.531 "enable_ktls": false 00:38:47.531 } 00:38:47.531 } 00:38:47.531 ] 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "subsystem": "vmd", 00:38:47.531 "config": [] 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "subsystem": "accel", 00:38:47.531 "config": [ 00:38:47.531 { 00:38:47.531 "method": "accel_set_options", 00:38:47.531 "params": { 00:38:47.531 "small_cache_size": 128, 00:38:47.531 "large_cache_size": 16, 00:38:47.531 "task_count": 2048, 00:38:47.531 "sequence_count": 2048, 00:38:47.531 "buf_count": 2048 00:38:47.531 } 00:38:47.531 } 00:38:47.531 ] 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "subsystem": "bdev", 00:38:47.531 "config": [ 00:38:47.531 { 00:38:47.531 "method": "bdev_set_options", 00:38:47.531 "params": { 00:38:47.531 "bdev_io_pool_size": 65535, 00:38:47.531 "bdev_io_cache_size": 256, 00:38:47.531 "bdev_auto_examine": true, 00:38:47.531 "iobuf_small_cache_size": 128, 00:38:47.531 "iobuf_large_cache_size": 16 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "bdev_raid_set_options", 00:38:47.531 "params": { 00:38:47.531 "process_window_size_kb": 1024, 00:38:47.531 
"process_max_bandwidth_mb_sec": 0 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "bdev_iscsi_set_options", 00:38:47.531 "params": { 00:38:47.531 "timeout_sec": 30 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "bdev_nvme_set_options", 00:38:47.531 "params": { 00:38:47.531 "action_on_timeout": "none", 00:38:47.531 "timeout_us": 0, 00:38:47.531 "timeout_admin_us": 0, 00:38:47.531 "keep_alive_timeout_ms": 10000, 00:38:47.531 "arbitration_burst": 0, 00:38:47.531 "low_priority_weight": 0, 00:38:47.531 "medium_priority_weight": 0, 00:38:47.531 "high_priority_weight": 0, 00:38:47.531 "nvme_adminq_poll_period_us": 10000, 00:38:47.531 "nvme_ioq_poll_period_us": 0, 00:38:47.531 "io_queue_requests": 512, 00:38:47.531 "delay_cmd_submit": true, 00:38:47.531 "transport_retry_count": 4, 00:38:47.531 "bdev_retry_count": 3, 00:38:47.531 "transport_ack_timeout": 0, 00:38:47.531 "ctrlr_loss_timeout_sec": 0, 00:38:47.531 "reconnect_delay_sec": 0, 00:38:47.531 "fast_io_fail_timeout_sec": 0, 00:38:47.531 "disable_auto_failback": false, 00:38:47.531 "generate_uuids": false, 00:38:47.531 "transport_tos": 0, 00:38:47.531 "nvme_error_stat": false, 00:38:47.531 "rdma_srq_size": 0, 00:38:47.531 "io_path_stat": false, 00:38:47.531 "allow_accel_sequence": false, 00:38:47.531 "rdma_max_cq_size": 0, 00:38:47.531 "rdma_cm_event_timeout_ms": 0, 00:38:47.531 "dhchap_digests": [ 00:38:47.531 "sha256", 00:38:47.531 "sha384", 00:38:47.531 "sha512" 00:38:47.531 ], 00:38:47.531 "dhchap_dhgroups": [ 00:38:47.531 "null", 00:38:47.531 "ffdhe2048", 00:38:47.531 "ffdhe3072", 00:38:47.531 "ffdhe4096", 00:38:47.531 "ffdhe6144", 00:38:47.531 "ffdhe8192" 00:38:47.531 ], 00:38:47.531 "rdma_umr_per_io": false 00:38:47.531 } 00:38:47.531 }, 00:38:47.531 { 00:38:47.531 "method": "bdev_nvme_attach_controller", 00:38:47.531 "params": { 00:38:47.531 "name": "nvme0", 00:38:47.532 "trtype": "TCP", 00:38:47.532 "adrfam": "IPv4", 00:38:47.532 "traddr": "127.0.0.1", 00:38:47.532 
"trsvcid": "4420", 00:38:47.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:47.532 "prchk_reftag": false, 00:38:47.532 "prchk_guard": false, 00:38:47.532 "ctrlr_loss_timeout_sec": 0, 00:38:47.532 "reconnect_delay_sec": 0, 00:38:47.532 "fast_io_fail_timeout_sec": 0, 00:38:47.532 "psk": "key0", 00:38:47.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:47.532 "hdgst": false, 00:38:47.532 "ddgst": false, 00:38:47.532 "multipath": "multipath" 00:38:47.532 } 00:38:47.532 }, 00:38:47.532 { 00:38:47.532 "method": "bdev_nvme_set_hotplug", 00:38:47.532 "params": { 00:38:47.532 "period_us": 100000, 00:38:47.532 "enable": false 00:38:47.532 } 00:38:47.532 }, 00:38:47.532 { 00:38:47.532 "method": "bdev_wait_for_examine" 00:38:47.532 } 00:38:47.532 ] 00:38:47.532 }, 00:38:47.532 { 00:38:47.532 "subsystem": "nbd", 00:38:47.532 "config": [] 00:38:47.532 } 00:38:47.532 ] 00:38:47.532 }' 00:38:47.532 06:04:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.532 06:04:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:47.532 [2024-12-10 06:04:05.439087] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:38:47.532 [2024-12-10 06:04:05.439134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437356 ] 00:38:47.789 [2024-12-10 06:04:05.518960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.789 [2024-12-10 06:04:05.559216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.789 [2024-12-10 06:04:05.719477] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:48.468 06:04:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.468 06:04:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:48.468 06:04:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:48.468 06:04:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:48.468 06:04:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.766 06:04:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:48.766 06:04:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.766 06:04:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:48.766 06:04:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:48.766 06:04:06 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.766 06:04:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:49.025 06:04:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:49.025 06:04:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:49.025 06:04:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:49.025 06:04:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:49.283 06:04:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:49.283 06:04:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:49.283 06:04:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YjCuI4O4aJ /tmp/tmp.NENw5QNZPf 00:38:49.283 06:04:07 keyring_file -- keyring/file.sh@20 -- # killprocess 437356 00:38:49.283 06:04:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 437356 ']' 00:38:49.283 06:04:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 437356 00:38:49.283 06:04:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437356 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 437356' 00:38:49.284 killing process with pid 437356 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@973 -- # kill 437356 00:38:49.284 Received shutdown signal, test time was about 1.000000 seconds 00:38:49.284 00:38:49.284 Latency(us) 00:38:49.284 [2024-12-10T05:04:07.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.284 [2024-12-10T05:04:07.243Z] =================================================================================================================== 00:38:49.284 [2024-12-10T05:04:07.243Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:49.284 06:04:07 keyring_file -- common/autotest_common.sh@978 -- # wait 437356 00:38:49.543 06:04:07 keyring_file -- keyring/file.sh@21 -- # killprocess 435631 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 435631 ']' 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 435631 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435631 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435631' 00:38:49.543 killing process with pid 435631 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@973 -- # kill 435631 00:38:49.543 06:04:07 keyring_file -- common/autotest_common.sh@978 -- # wait 435631 00:38:49.801 00:38:49.801 real 0m11.886s 00:38:49.801 user 0m29.499s 00:38:49.801 sys 0m2.794s 00:38:49.801 06:04:07 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.801 06:04:07 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:49.801 ************************************ 00:38:49.801 END TEST keyring_file 00:38:49.801 ************************************ 00:38:49.801 06:04:07 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:49.801 06:04:07 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:49.801 06:04:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:49.801 06:04:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.801 06:04:07 -- common/autotest_common.sh@10 -- # set +x 00:38:49.802 ************************************ 00:38:49.802 START TEST keyring_linux 00:38:49.802 ************************************ 00:38:49.802 06:04:07 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:49.802 Joined session keyring: 836057033 00:38:50.060 * Looking for test storage... 
00:38:50.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:50.060 06:04:07 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:50.060 06:04:07 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:38:50.060 06:04:07 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:50.060 06:04:07 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:50.060 06:04:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:50.061 06:04:07 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.061 06:04:07 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:50.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.061 --rc genhtml_branch_coverage=1 00:38:50.061 --rc genhtml_function_coverage=1 00:38:50.061 --rc genhtml_legend=1 00:38:50.061 --rc geninfo_all_blocks=1 00:38:50.061 --rc geninfo_unexecuted_blocks=1 00:38:50.061 00:38:50.061 ' 00:38:50.061 06:04:07 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:50.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.061 --rc genhtml_branch_coverage=1 00:38:50.061 --rc genhtml_function_coverage=1 00:38:50.061 --rc genhtml_legend=1 00:38:50.061 --rc geninfo_all_blocks=1 00:38:50.061 --rc geninfo_unexecuted_blocks=1 00:38:50.061 00:38:50.061 ' 
00:38:50.061 06:04:07 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:50.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.061 --rc genhtml_branch_coverage=1 00:38:50.061 --rc genhtml_function_coverage=1 00:38:50.061 --rc genhtml_legend=1 00:38:50.061 --rc geninfo_all_blocks=1 00:38:50.061 --rc geninfo_unexecuted_blocks=1 00:38:50.061 00:38:50.061 ' 00:38:50.061 06:04:07 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:50.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.061 --rc genhtml_branch_coverage=1 00:38:50.061 --rc genhtml_function_coverage=1 00:38:50.061 --rc genhtml_legend=1 00:38:50.061 --rc geninfo_all_blocks=1 00:38:50.061 --rc geninfo_unexecuted_blocks=1 00:38:50.061 00:38:50.061 ' 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.061 06:04:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.061 06:04:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.061 06:04:07 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.061 06:04:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.061 06:04:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:50.061 06:04:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:50.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:50.061 /tmp/:spdk-test:key0 00:38:50.061 06:04:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:50.061 06:04:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:50.061 06:04:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:50.061 06:04:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:50.061 /tmp/:spdk-test:key1 00:38:50.061 06:04:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=437818 00:38:50.061 06:04:08 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:50.061 06:04:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 437818 00:38:50.061 06:04:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 437818 ']' 00:38:50.061 06:04:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.061 06:04:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.061 06:04:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.061 06:04:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.061 06:04:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:50.320 [2024-12-10 06:04:08.053586] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:38:50.320 [2024-12-10 06:04:08.053634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437818 ] 00:38:50.320 [2024-12-10 06:04:08.129845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.320 [2024-12-10 06:04:08.171664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:50.578 06:04:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:50.578 [2024-12-10 06:04:08.399407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.578 null0 00:38:50.578 [2024-12-10 06:04:08.431462] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:50.578 [2024-12-10 06:04:08.431761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.578 06:04:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:50.578 1670921 00:38:50.578 06:04:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:50.578 905889306 00:38:50.578 06:04:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=437919 00:38:50.578 06:04:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 437919 /var/tmp/bperf.sock 00:38:50.578 06:04:08 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 437919 ']' 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:50.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.578 06:04:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:50.578 [2024-12-10 06:04:08.501763] Starting SPDK v25.01-pre git sha1 4fb5f9881 / DPDK 24.03.0 initialization... 
00:38:50.578 [2024-12-10 06:04:08.501805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437919 ] 00:38:50.836 [2024-12-10 06:04:08.565422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.836 [2024-12-10 06:04:08.605857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.836 06:04:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.836 06:04:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:50.836 06:04:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:50.836 06:04:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:51.095 06:04:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:51.095 06:04:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:51.353 06:04:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:51.353 06:04:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:51.353 [2024-12-10 06:04:09.234601] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:51.353 nvme0n1 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:51.612 06:04:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:51.612 06:04:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:51.612 06:04:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:51.612 06:04:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.612 06:04:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.870 06:04:09 keyring_linux -- keyring/linux.sh@25 -- # sn=1670921 00:38:51.870 06:04:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:51.870 06:04:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:51.870 06:04:09 keyring_linux -- keyring/linux.sh@26 -- # [[ 1670921 == \1\6\7\0\9\2\1 ]] 00:38:51.870 06:04:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1670921 00:38:51.871 06:04:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:51.871 06:04:09 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:51.871 Running I/O for 1 seconds... 00:38:53.245 21662.00 IOPS, 84.62 MiB/s 00:38:53.245 Latency(us) 00:38:53.245 [2024-12-10T05:04:11.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.245 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:53.245 nvme0n1 : 1.01 21663.16 84.62 0.00 0.00 5889.29 4868.39 10173.68 00:38:53.245 [2024-12-10T05:04:11.204Z] =================================================================================================================== 00:38:53.245 [2024-12-10T05:04:11.204Z] Total : 21663.16 84.62 0.00 0.00 5889.29 4868.39 10173.68 00:38:53.245 { 00:38:53.245 "results": [ 00:38:53.245 { 00:38:53.245 "job": "nvme0n1", 00:38:53.245 "core_mask": "0x2", 00:38:53.245 "workload": "randread", 00:38:53.245 "status": "finished", 00:38:53.245 "queue_depth": 128, 00:38:53.245 "io_size": 4096, 00:38:53.245 "runtime": 1.005855, 00:38:53.245 "iops": 21663.162185404457, 00:38:53.245 "mibps": 84.62172728673616, 00:38:53.245 "io_failed": 0, 00:38:53.245 "io_timeout": 0, 00:38:53.245 "avg_latency_us": 5889.288755436089, 00:38:53.245 "min_latency_us": 4868.388571428572, 00:38:53.245 "max_latency_us": 10173.683809523809 00:38:53.245 } 00:38:53.245 ], 00:38:53.245 "core_count": 1 00:38:53.245 } 00:38:53.245 06:04:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:53.245 06:04:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:53.245 06:04:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:53.245 06:04:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:53.245 06:04:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:53.245 06:04:11 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:53.245 06:04:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:53.245 06:04:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.502 06:04:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:53.502 06:04:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:53.502 06:04:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:53.502 06:04:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.502 06:04:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:53.502 06:04:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:53.502 [2024-12-10 06:04:11.440123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:53.502 [2024-12-10 06:04:11.441094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56580 (107): Transport endpoint is not connected 00:38:53.503 [2024-12-10 06:04:11.442088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56580 (9): Bad file descriptor 00:38:53.503 [2024-12-10 06:04:11.443089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:53.503 [2024-12-10 06:04:11.443099] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:53.503 [2024-12-10 06:04:11.443107] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:53.503 [2024-12-10 06:04:11.443114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:53.503 request: 00:38:53.503 { 00:38:53.503 "name": "nvme0", 00:38:53.503 "trtype": "tcp", 00:38:53.503 "traddr": "127.0.0.1", 00:38:53.503 "adrfam": "ipv4", 00:38:53.503 "trsvcid": "4420", 00:38:53.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:53.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:53.503 "prchk_reftag": false, 00:38:53.503 "prchk_guard": false, 00:38:53.503 "hdgst": false, 00:38:53.503 "ddgst": false, 00:38:53.503 "psk": ":spdk-test:key1", 00:38:53.503 "allow_unrecognized_csi": false, 00:38:53.503 "method": "bdev_nvme_attach_controller", 00:38:53.503 "req_id": 1 00:38:53.503 } 00:38:53.503 Got JSON-RPC error response 00:38:53.503 response: 00:38:53.503 { 00:38:53.503 "code": -5, 00:38:53.503 "message": "Input/output error" 00:38:53.503 } 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@33 -- # sn=1670921 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1670921 00:38:53.761 1 links removed 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:53.761 
06:04:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@33 -- # sn=905889306 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 905889306 00:38:53.761 1 links removed 00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 437919 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 437919 ']' 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 437919 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437919 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437919' 00:38:53.761 killing process with pid 437919 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 437919 00:38:53.761 Received shutdown signal, test time was about 1.000000 seconds 00:38:53.761 00:38:53.761 Latency(us) 00:38:53.761 [2024-12-10T05:04:11.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.761 [2024-12-10T05:04:11.720Z] =================================================================================================================== 00:38:53.761 [2024-12-10T05:04:11.720Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 437919 
00:38:53.761 06:04:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 437818 00:38:53.761 06:04:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 437818 ']' 00:38:53.762 06:04:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 437818 00:38:53.762 06:04:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:53.762 06:04:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:53.762 06:04:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437818 00:38:54.020 06:04:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:54.020 06:04:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:54.020 06:04:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437818' 00:38:54.020 killing process with pid 437818 00:38:54.020 06:04:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 437818 00:38:54.020 06:04:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 437818 00:38:54.279 00:38:54.279 real 0m4.334s 00:38:54.279 user 0m8.184s 00:38:54.279 sys 0m1.387s 00:38:54.279 06:04:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:54.279 06:04:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:54.279 ************************************ 00:38:54.279 END TEST keyring_linux 00:38:54.279 ************************************ 00:38:54.279 06:04:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:54.279 06:04:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:54.279 06:04:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:54.279 06:04:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:54.279 06:04:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:54.279 06:04:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:54.279 06:04:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:54.279 06:04:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:54.279 06:04:12 -- common/autotest_common.sh@10 -- # set +x 00:38:54.279 06:04:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:54.279 06:04:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:54.279 06:04:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:54.279 06:04:12 -- common/autotest_common.sh@10 -- # set +x 00:38:59.553 INFO: APP EXITING 00:38:59.553 INFO: killing all VMs 00:38:59.553 INFO: killing vhost app 00:38:59.553 INFO: EXIT DONE 00:39:02.842 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:39:02.842 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:39:02.842 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:39:02.842 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:39:03.101 0000:80:04.7 (8086 2021): Already using the ioatdma driver 
00:39:03.101 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:39:03.101 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:39:03.101 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:39:03.101 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:39:03.101 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:39:03.101 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:39:03.101 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:39:06.392 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:39:06.392 Cleaning
00:39:06.392 Removing: /var/run/dpdk/spdk0/config
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:39:06.392 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:39:06.392 Removing: /var/run/dpdk/spdk0/hugepage_info
00:39:06.392 Removing: /var/run/dpdk/spdk1/config
00:39:06.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:39:06.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:39:06.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:39:06.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:39:06.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:39:06.651 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:39:06.651 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:39:06.651 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:39:06.651 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:39:06.651 Removing: /var/run/dpdk/spdk1/hugepage_info
00:39:06.651 Removing: /var/run/dpdk/spdk2/config
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:06.651 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:06.651 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:06.651 Removing: /var/run/dpdk/spdk3/config
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:06.651 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:06.651 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:06.651 Removing: /var/run/dpdk/spdk4/config
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:06.651 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:06.651 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:06.651 Removing: /dev/shm/bdev_svc_trace.1
00:39:06.651 Removing: /dev/shm/nvmf_trace.0
00:39:06.651 Removing: /dev/shm/spdk_tgt_trace.pid4121584
00:39:06.651 Removing: /var/run/dpdk/spdk0
00:39:06.651 Removing: /var/run/dpdk/spdk1
00:39:06.651 Removing: /var/run/dpdk/spdk2
00:39:06.651 Removing: /var/run/dpdk/spdk3
00:39:06.651 Removing: /var/run/dpdk/spdk4
00:39:06.651 Removing: /var/run/dpdk/spdk_pid100006
00:39:06.651 Removing: /var/run/dpdk/spdk_pid100568
00:39:06.651 Removing: /var/run/dpdk/spdk_pid100577
00:39:06.651 Removing: /var/run/dpdk/spdk_pid100804
00:39:06.651 Removing: /var/run/dpdk/spdk_pid101028
00:39:06.651 Removing: /var/run/dpdk/spdk_pid101030
00:39:06.651 Removing: /var/run/dpdk/spdk_pid101929
00:39:06.651 Removing: /var/run/dpdk/spdk_pid102877
00:39:06.651 Removing: /var/run/dpdk/spdk_pid103955
00:39:06.651 Removing: /var/run/dpdk/spdk_pid104713
00:39:06.651 Removing: /var/run/dpdk/spdk_pid104715
00:39:06.651 Removing: /var/run/dpdk/spdk_pid104949
00:39:06.651 Removing: /var/run/dpdk/spdk_pid105964
00:39:06.651 Removing: /var/run/dpdk/spdk_pid107000
00:39:06.652 Removing: /var/run/dpdk/spdk_pid115648
00:39:06.652 Removing: /var/run/dpdk/spdk_pid144925
00:39:06.911 Removing: /var/run/dpdk/spdk_pid149922
00:39:06.911 Removing: /var/run/dpdk/spdk_pid151604
00:39:06.911 Removing: /var/run/dpdk/spdk_pid153313
00:39:06.911 Removing: /var/run/dpdk/spdk_pid153545
00:39:06.911 Removing: /var/run/dpdk/spdk_pid153702
00:39:06.911 Removing: /var/run/dpdk/spdk_pid153789
00:39:06.911 Removing: /var/run/dpdk/spdk_pid154287
00:39:06.911 Removing: /var/run/dpdk/spdk_pid156102
00:39:06.911 Removing: /var/run/dpdk/spdk_pid157077
00:39:06.911 Removing: /var/run/dpdk/spdk_pid157485
00:39:06.911 Removing: /var/run/dpdk/spdk_pid159643
00:39:06.911 Removing: /var/run/dpdk/spdk_pid160125
00:39:06.911 Removing: /var/run/dpdk/spdk_pid160842
00:39:06.911 Removing: /var/run/dpdk/spdk_pid165362
00:39:06.911 Removing: /var/run/dpdk/spdk_pid171409
00:39:06.911 Removing: /var/run/dpdk/spdk_pid171411
00:39:06.911 Removing: /var/run/dpdk/spdk_pid171413
00:39:06.911 Removing: /var/run/dpdk/spdk_pid175481
00:39:06.911 Removing: /var/run/dpdk/spdk_pid185461
00:39:06.911 Removing: /var/run/dpdk/spdk_pid189703
00:39:06.911 Removing: /var/run/dpdk/spdk_pid196141
00:39:06.911 Removing: /var/run/dpdk/spdk_pid197437
00:39:06.911 Removing: /var/run/dpdk/spdk_pid198870
00:39:06.911 Removing: /var/run/dpdk/spdk_pid200292
00:39:06.911 Removing: /var/run/dpdk/spdk_pid205460
00:39:06.911 Removing: /var/run/dpdk/spdk_pid210258
00:39:06.911 Removing: /var/run/dpdk/spdk_pid214763
00:39:06.911 Removing: /var/run/dpdk/spdk_pid223112
00:39:06.911 Removing: /var/run/dpdk/spdk_pid223168
00:39:06.911 Removing: /var/run/dpdk/spdk_pid228310
00:39:06.911 Removing: /var/run/dpdk/spdk_pid228537
00:39:06.911 Removing: /var/run/dpdk/spdk_pid228761
00:39:06.911 Removing: /var/run/dpdk/spdk_pid229169
00:39:06.911 Removing: /var/run/dpdk/spdk_pid229222
00:39:06.911 Removing: /var/run/dpdk/spdk_pid234086
00:39:06.911 Removing: /var/run/dpdk/spdk_pid235035
00:39:06.911 Removing: /var/run/dpdk/spdk_pid239847
00:39:06.911 Removing: /var/run/dpdk/spdk_pid242515
00:39:06.911 Removing: /var/run/dpdk/spdk_pid248192
00:39:06.911 Removing: /var/run/dpdk/spdk_pid25158
00:39:06.911 Removing: /var/run/dpdk/spdk_pid253984
00:39:06.911 Removing: /var/run/dpdk/spdk_pid263125
00:39:06.911 Removing: /var/run/dpdk/spdk_pid270811
00:39:06.911 Removing: /var/run/dpdk/spdk_pid270867
00:39:06.911 Removing: /var/run/dpdk/spdk_pid291809
00:39:06.911 Removing: /var/run/dpdk/spdk_pid292284
00:39:06.911 Removing: /var/run/dpdk/spdk_pid292804
00:39:06.911 Removing: /var/run/dpdk/spdk_pid293431
00:39:06.911 Removing: /var/run/dpdk/spdk_pid294163
00:39:06.911 Removing: /var/run/dpdk/spdk_pid294682
00:39:06.911 Removing: /var/run/dpdk/spdk_pid295313
00:39:06.911 Removing: /var/run/dpdk/spdk_pid295784
00:39:06.911 Removing: /var/run/dpdk/spdk_pid29663
00:39:06.911 Removing: /var/run/dpdk/spdk_pid300289
00:39:06.911 Removing: /var/run/dpdk/spdk_pid300519
00:39:06.911 Removing: /var/run/dpdk/spdk_pid307047
00:39:06.911 Removing: /var/run/dpdk/spdk_pid307320
00:39:06.911 Removing: /var/run/dpdk/spdk_pid313032
00:39:06.911 Removing: /var/run/dpdk/spdk_pid317736
00:39:06.911 Removing: /var/run/dpdk/spdk_pid328000
00:39:06.911 Removing: /var/run/dpdk/spdk_pid329063
00:39:06.911 Removing: /var/run/dpdk/spdk_pid333647
00:39:06.911 Removing: /var/run/dpdk/spdk_pid334033
00:39:06.911 Removing: /var/run/dpdk/spdk_pid338621
00:39:06.911 Removing: /var/run/dpdk/spdk_pid344617
00:39:07.171 Removing: /var/run/dpdk/spdk_pid347169
00:39:07.171 Removing: /var/run/dpdk/spdk_pid3564
00:39:07.171 Removing: /var/run/dpdk/spdk_pid358053
00:39:07.171 Removing: /var/run/dpdk/spdk_pid367631
00:39:07.171 Removing: /var/run/dpdk/spdk_pid369212
00:39:07.171 Removing: /var/run/dpdk/spdk_pid370122
00:39:07.171 Removing: /var/run/dpdk/spdk_pid387923
00:39:07.171 Removing: /var/run/dpdk/spdk_pid392313
00:39:07.171 Removing: /var/run/dpdk/spdk_pid395090
00:39:07.171 Removing: /var/run/dpdk/spdk_pid403252
00:39:07.171 Removing: /var/run/dpdk/spdk_pid403259
00:39:07.171 Removing: /var/run/dpdk/spdk_pid408894
00:39:07.171 Removing: /var/run/dpdk/spdk_pid410836
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4119464
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4120510
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4121584
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4122223
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4123158
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4123386
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4124348
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4124570
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4124780
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4126426
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4127700
00:39:07.171 Removing: /var/run/dpdk/spdk_pid412779
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4127983
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4128269
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4128580
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4128920
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4129128
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4129373
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4129653
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4130384
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4133344
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4133731
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4133938
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4134085
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4134573
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4134584
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4135070
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4135082
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4135339
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4135566
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4135819
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4135828
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4136388
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4136637
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4136927
00:39:07.171 Removing: /var/run/dpdk/spdk_pid413813
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4141107
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4145850
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4156998
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4157483
00:39:07.171 Removing: /var/run/dpdk/spdk_pid415902
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4162322
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4162678
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4167225
00:39:07.171 Removing: /var/run/dpdk/spdk_pid417023
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4173525
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4176197
00:39:07.171 Removing: /var/run/dpdk/spdk_pid4187303
00:39:07.171 Removing: /var/run/dpdk/spdk_pid426962
00:39:07.171 Removing: /var/run/dpdk/spdk_pid427435
00:39:07.171 Removing: /var/run/dpdk/spdk_pid428084
00:39:07.171 Removing: /var/run/dpdk/spdk_pid430458
00:39:07.171 Removing: /var/run/dpdk/spdk_pid430959
00:39:07.428 Removing: /var/run/dpdk/spdk_pid431530
00:39:07.428 Removing: /var/run/dpdk/spdk_pid435631
00:39:07.428 Removing: /var/run/dpdk/spdk_pid435817
00:39:07.428 Removing: /var/run/dpdk/spdk_pid437356
00:39:07.428 Removing: /var/run/dpdk/spdk_pid437818
00:39:07.428 Removing: /var/run/dpdk/spdk_pid437919
00:39:07.428 Removing: /var/run/dpdk/spdk_pid5919
00:39:07.428 Removing: /var/run/dpdk/spdk_pid6848
00:39:07.428 Removing: /var/run/dpdk/spdk_pid78583
00:39:07.428 Removing: /var/run/dpdk/spdk_pid84238
00:39:07.428 Removing: /var/run/dpdk/spdk_pid90457
00:39:07.428 Removing: /var/run/dpdk/spdk_pid97396
00:39:07.428 Removing: /var/run/dpdk/spdk_pid97404
00:39:07.428 Removing: /var/run/dpdk/spdk_pid98301
00:39:07.428 Removing: /var/run/dpdk/spdk_pid99201
00:39:07.428 Clean
00:39:07.428 06:04:25 -- common/autotest_common.sh@1453 -- # return 0
00:39:07.428 06:04:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:39:07.428 06:04:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:07.428 06:04:25 -- common/autotest_common.sh@10 -- # set +x
00:39:07.428 06:04:25 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:39:07.428 06:04:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:07.428 06:04:25 -- common/autotest_common.sh@10 -- # set +x
00:39:07.428 06:04:25 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:07.428 06:04:25 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:07.428 06:04:25 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:07.428 06:04:25 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:07.428 06:04:25 -- spdk/autotest.sh@398 -- # hostname
00:39:07.428 06:04:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:07.686 geninfo: WARNING: invalid characters removed from testname!
00:39:29.621 06:04:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:30.998 06:04:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:32.901 06:04:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:34.804 06:04:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:36.709 06:04:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:38.615 06:04:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:40.518 06:04:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:40.518 06:04:58 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:40.518 06:04:58 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:40.518 06:04:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:40.518 06:04:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:40.518 06:04:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:40.518 + [[ -n 4041229 ]]
00:39:40.518 + sudo kill 4041229
00:39:40.527 [Pipeline] }
00:39:40.542 [Pipeline] // stage
00:39:40.547 [Pipeline] }
00:39:40.561 [Pipeline] // timeout
00:39:40.566 [Pipeline] }
00:39:40.580 [Pipeline] // catchError
00:39:40.586 [Pipeline] }
00:39:40.600 [Pipeline] // wrap
00:39:40.606 [Pipeline] }
00:39:40.619 [Pipeline] // catchError
00:39:40.627 [Pipeline] stage
00:39:40.629 [Pipeline] { (Epilogue)
00:39:40.640 [Pipeline] catchError
00:39:40.642 [Pipeline] {
00:39:40.654 [Pipeline] echo
00:39:40.656 Cleanup processes
00:39:40.661 [Pipeline] sh
00:39:40.946 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:40.946 449190 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:40.959 [Pipeline] sh
00:39:41.242 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:41.242 ++ grep -v 'sudo pgrep'
00:39:41.242 ++ awk '{print $1}'
00:39:41.242 + sudo kill -9
00:39:41.242 + true
00:39:41.253 [Pipeline] sh
00:39:41.535 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:53.752 [Pipeline] sh
00:39:54.036 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:54.036 Artifacts sizes are good
00:39:54.051 [Pipeline] archiveArtifacts
00:39:54.058 Archiving artifacts
00:39:54.205 [Pipeline] sh
00:39:54.548 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:54.563 [Pipeline] cleanWs
00:39:54.573 [WS-CLEANUP] Deleting project workspace...
00:39:54.573 [WS-CLEANUP] Deferred wipeout is used...
00:39:54.580 [WS-CLEANUP] done
00:39:54.582 [Pipeline] }
00:39:54.599 [Pipeline] // catchError
00:39:54.611 [Pipeline] sh
00:39:54.893 + logger -p user.info -t JENKINS-CI
00:39:54.902 [Pipeline] }
00:39:54.916 [Pipeline] // stage
00:39:54.921 [Pipeline] }
00:39:54.936 [Pipeline] // node
00:39:54.941 [Pipeline] End of Pipeline
00:39:54.977 Finished: SUCCESS